
March 6, 2026 41 mins

AI!! Today, we dive into the often-overlooked implications of AI beyond its productivity perks. Joined by my friend and AI specialist David Lobue, we explore AI's evolution, biases, real-world applications, and the massive energy demands it's placing on society. We cover everything from language models to corporate control, and how powerhouses like Elon Musk and tools like Grok and ChatGPT are shaping the way we work and produce. 

 

@LyndaMick

@RogueRecap

roguerecap.com

#AI #Grok #Energy

See omnystudio.com/listener for privacy information.


Episode Transcript

Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Speaker 1 (00:07):
Welcome to the Rogue Recap.

Speaker 2 (00:09):
Hot takes, cold facts, and zero respect for the official narrative.

Speaker 1 (00:15):
Sit back, roll your eyes, and let's recap rogue style.
What's up, everybody, and welcome to the Rogue Recap. I
am Lynda McLaughlin, your host. Here I am again, at
Lynda Mick, at Rogue Recap, roguerecap dot com. You know
what to do, YadA, YadA, YadA. So there's a million
things going on, like I say every single day because
it's true, so I think it bears repeating. But one

(00:36):
of the things that is not getting discussed is AI.
So we're talking about all the cool things we can
do with it, all the cool things it can do
for us to help us with time management and expediting projects,
and that's all wonderful, But what else is it doing?
I think that's my question. And I am certainly not
an expert. I use it every day because I work

(00:56):
in media, as most of you know. But I have
a dear friend I call Doctor Dave. He's telling me
he's not a doctor. I think it's fake news, but
he's telling me he's not, so I'm going to leave
that there. But his name is Dave Lobue. He's wicked smart.
He's got about seventy eight master's degrees in artificial intelligence
and language algorithms and all sorts of sciences that regular
people just go, Hey, I need a guy for that,

(01:18):
and he's the guy. So Dave, welcome to the show.

Speaker 2 (01:21):
Thank you, happy to be here.

Speaker 1 (01:22):
Lynda. Good, glad. For those of you who, you know,
haven't listened to my show, I like to bring my
friends on. Dave is a great friend, and he just
happens to be super wicked smart. So I said to him,
can you come on and talk to me about AI
because I'm seeing all this news come out and one
of the biggest things that people are asking is how

(01:45):
the hell are we going to run this? We don't
have the energy. And so I told Dave, I said,
I want to talk about what the president is saying.
I want to talk about what Elon Musk is saying,
but I also want to talk about the functionality of
how this really happens. So I'm going to have Dave talk
to us for a hot second about his background so
that we all know why he's worthy of talking about
these things, and then we're going to get into what

(02:07):
happens next here, you know, for the regular American citizen
on their electric bill, to how they use AI in
everyday life. So Dave, tell us a little bit about
your background and why you love to nerd out on
this stuff.

Speaker 2 (02:20):
Yeah, sure thing. And for my background, it comes from
a combination of reason, philosophy, mathematics, data science, and artificial intelligence.
So I've seen the evolution of language processing as it's
come through different algorithms, and the world we're living in
today is really a quantum leap from what we saw

(02:41):
ten to fifteen years ago. And from a neuroscience standpoint,
from a language and linguistic understanding standpoint, from a compute standpoint,
all of these areas have evolved very rapidly in the
last five years, and we're seeing major increases in what
these AI algorithms can do: the large language models, who

(03:05):
owns them, who's developed them, who has the control over them,
how they train them, and what that means for us
as users to interact with them. So there's a lot
of open questions that we have to go through and
think about at a deeper level, both in terms of
what kind of dependency and interaction do we want to
have with these AIs.

Speaker 1 (03:27):
So can I interrupt you on that point? Can you
explain to us in layman's terms? What is language processing?

Speaker 3 (03:35):
Sure?

Speaker 2 (03:35):
So, language processing? The quick background is you're taking text
as it is typed in a computer, or word documents
or anything that you use digitally. And about ten to
fifteen years ago, we were basically looking at collections of words,
strings of words: how do they relate to each other?
What combinations of words are used together to identify likeness?

(04:00):
Then something came out of, actually, Google Brain, which
was the paper called "Attention Is All You Need," that
developed into the GPT revolution that we know about.
OpenAI then created that and evolved it. It's sort
of a major breakthrough that allows language processing to understand context.

(04:21):
So what that means is, as I'm speaking right now,
I'm thinking about the next word or set of words
that I'm going to deliver, and that's based on the
previous words that I've used and sort of my background
and knowledge, and language models now have the ability to
look backwards in history that they never had the ability
to do before. They were only looking at sort of
certain strings of text place side by side, or certain documents.

(04:46):
And this relates to the energy usage. So we're looking
now not just in a single, very narrow sense across
you know, hundreds or thousands of documents. We're looking across
documents that spanned all digital information potentially, so anything we
have online that's and scanned and catalog and organized that
is now available memory to generate the next word that's

(05:06):
being predicted as you talk to an LLM. So it's
been quite a change in how we can use these models,
what we can expect from them, and what they're able
to do and to evolve from that. Language models are
no longer just about language. They have multimodal aspects. As
we know, we can use image generation, voice generation, you

(05:30):
can do image to text, text to image, image to image,
and they understand all these different modalities in a deep
way, much as humans would. So we're not quite at general
intelligence for AI, there's still a long way to go there,
but we have a lot of advanced understanding for these
algorithms to be able to understand an image, understand a video,

(05:54):
modify it or edit it and give us back something
based on what we're requesting of it. It's gone from
just language to basically anything digital now can be used
through a language model.
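Dave's description of context-based next-word prediction can be sketched in a few lines. This is a toy comparison, not any real model: it contrasts a bigram lookup, which sees only the single previous word (the old n-gram style he mentions), with a longest-suffix match over the whole history, a crude stand-in for what attention makes possible. The corpus and words are invented for illustration.

```python
# Toy illustration of "looking backwards in history" vs. narrow word windows.
from collections import Counter, defaultdict

corpus = "the cat sat on the mat the cat ate the fish".split()

# 1) Bigram model: the next-word guess depends only on the previous word.
bigram = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigram[prev][nxt] += 1

def predict_bigram(prev_word):
    """Most frequent word seen right after prev_word in the corpus."""
    return bigram[prev_word].most_common(1)[0][0]

# 2) "Context" model: match the longest suffix of the entire history against
#    the corpus and return what followed its first occurrence. A crude
#    stand-in for attention conditioning on everything said so far.
def predict_with_context(history):
    for start in range(len(history)):
        suffix = history[start:]
        for i in range(len(corpus) - len(suffix)):
            if corpus[i:i + len(suffix)] == suffix:
                return corpus[i + len(suffix)]
    return None

print(predict_bigram("the"))                        # sees only "the" -> cat
print(predict_with_context(["the", "cat", "ate"]))  # uses more history -> the
```

The bigram model can only say "cat usually follows the," while the context version distinguishes "the cat sat" from "the cat ate," which is the shift from string-matching to context that Dave is describing.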

Speaker 1 (06:05):
Okay, So two questions to that. The first thing is
when you say it's collecting information, just how you and
I are having a conversation and we're picking and choosing
the words we're going to say next based upon the
word said previously. Now, we're saying, hmm, we're going to
go here, We're going to go there. I want to
ask this question, are these models, these AI models, are

(06:27):
they pulling from data sets that it's getting from inputs
all over the world, or is it just pulling from
the data set given to it in a particular user moment?

Speaker 2 (06:40):
So, a combination. Up until recently, that was how they were trained, quote unquote.
So you had large open web documents, basically anything that
was open to the web, or digital for that matter. Actually,
private data is very valuable: anything that is
not broadly available, in a very specific niche area, is

(07:02):
a valuable data asset for language models because those are
areas that they haven't been trained on. But otherwise more generally,
you know, after GPT from OpenAI came
out in twenty twenty, twenty twenty one, yeah, a lot
of that energy was used in just training of all
of the data that was available on the web. People
were scraping Twitter at the time, bringing those conversations in

(07:25):
using different social media, Reddit posts, all that was brought in.
That was training the model; that was basically building its
brain: how to think, what to reference when it's
constructing its sentences, what it knows. So obviously there's
a bias there if you scrape through information from a
certain leaning area, that's all it's going to know. So
balanced understanding and sort of training of these models is

(07:48):
very important. But to get back to your point now
that we've sort of evolved beyond the base training of
these language models, or what are called the foundation models,
a lot of them do integrate real time data, especially
Grok. That's a great segue, from X and xAI into Grok.
You have the ability for that LLM, that algorithm, to

(08:11):
basically feed real time all conversations, news that's posted back
and forth, direct messages, anything that's happening in x can
now be integrated real time into that language model and
used to generate responses. And this goes for other models too,
But it really is about the data they have access
to and how quickly they can reference it and bring
it back to the user.

Speaker 1 (08:32):
So it's interesting, and this goes directly to Grok, and
I'm looking this up on X as we speak, because
I want to make sure that I quote this accurately,
to that exact point of creating, for lack of a
better example or analogy, partisan robots, if you will, right, So,
we're creating the libs and the conservatives in the AI world,

(08:54):
which is so creepy because we've got plenty of them
in the human world. We don't need anymore. But I'll
leave that there. So Mark Benioff of Salesforce, he's the CEO.
I saw this interview yesterday, and he says that he
used Grok to make the viral image for his Dreamforce

(09:15):
Salesforce, you know, positioning, because ChatGPT refused to
create it. So ChatGPT said nope, I won't do it.
And he was like, this is the difference between open
creativity and corporate censorship. And I was like, that's wild.

(09:36):
Like this this interface literally said to you, no, I
won't do this, no, and I will tell you interestingly
enough to your point. We were talking about this the
other day. The ability to go image to image, text
to image, image to video, video to video, or image
to image to create a video, and the prompts right

(09:59):
the language that you're using. So the other day, I
was trying to create an announcement for my daughter and
where she's going to college. So I had a picture
of my daughter with her lacrosse stick, and then I had
where she was going to college, and I made the
super cool graphic. But then I took the images that
I made and I had to put them into a

(10:19):
real editing software because I know how to edit, to
piece it all together and do it the way that
I really wanted. What's interesting is I couldn't make it
in AI. I had to do it, if you will,
by hand, because that particular interface told me that she
was a minor and it wouldn't let me use the
image of her. She was in a lacrosse uniform with

(10:41):
a lacrosse stick. There was nothing inappropriate about this image whatsoever.
I did not supply her age. She is a minor,
but, I mean, she could have been eighteen, right, and still
be in high school. But it wouldn't let me do it.
And I thought, wow, that's interesting that it's not allowing me.
It's telling me, no, you're not allowed to make this

(11:01):
so it was like a direct example for me on
a much lower level obviously, because he's using it for Salesforce,
which is taking over so many of the in house
communications for many, many corporations. I don't actually care for it,
but that's a different conversation. But anyways, it's not user friendly,
it's not easy to use. But I thought that was
so interesting that this computer told me.

Speaker 2 (11:23):
No, yeah, And that's a good example of a guardrail
that's on. And they do have that capability. Obviously, the
owners that have built these foundation models, only a few
of them, big tech basically. You've got, yeah, Google with Gemini,
you've got now Grok from xAI, you have a few
open source models. But the other one is really Facebook

(11:44):
has their Meta Llama models. So there's really a few
paths you can funnel into when you're interacting with modern AI.
Typically they're referencing one of these larger models through a
back end connection, and they have control of that. I mean,
you can place things as sort of an intermediary business

(12:06):
line AI software, but if it's from the source determined
to have this guardrail or that guardrail, they have the
control over what is able to be spoken about, described,
utilized. And there are good use cases, you know, relating
to what you just said about having identified minor children,
right and mental health. That's another one where interactions are

(12:28):
sensitive among children. You want to make sure that the
right conversations are being had, that it's not going off the
rails in a negative way. And then the other side
of that is again the balance of what freedoms do
we allow them to maintain, because that's part of you know,
our rights and AI as it becomes a bigger and
broader part of our society, we won't know if we're

(12:49):
interacting with a human behind the screen, or whether it's
an AI bot or agent, because they will become so
realistic that that agent, that bot, that AI, if it's
pre built to act a certain way, that bias will
sort of subtly condition us to adapt. So it's a
form of censorship in a way, yeah, right, a control

(13:15):
and it definitely needs to be monitored, yeah, because it
can happen subtly and gradually and then altogether at some point.

Speaker 1 (13:24):
It's a little scary, you know. And there's a few
other things I want to get to, but one more
one more aspect of this part is I don't know
if you've seen on all of the social media platforms,
whether you're on Insta or you're on TikTok, or you're
on x or whatever. But people showing two examples, right,
they'll show I put this question into GROCK and I

(13:45):
put this question into chat GPT, and the answers are astounding.
It's truly shocking. And I'm like, oh my god,
it's like I have Trump and Biden doing my AI.
It's so creepy. It's just, I just don't know where to go
to get the lack of bias, and I don't know

(14:08):
that we can.

Speaker 2 (14:11):
Yeah, and that's a good example. I think some of
the earlier models. I say earlier, you know, two years
ago when Gemini came out, I think they got some
bad pr based on exactly that is, it really went
too far in one direction, and it was obvious. Yeah,
you couldn't even generate factual images of history without it
being blatantly injected with some kind of bias, if you will, right, right,

(14:35):
And there are ways to, well, there are limits to
how you can work around that. But again, if the
base model, and if we route basically back to three
or four foundation models owned by three or four companies.
They ultimately control the gates of what can and cannot
be done. And it's hard. I mean you can use,

(14:56):
like you said, the prompt engineering, the prompt, sort of,
adjustments, where you train or adjust how it responds. Yeah, there
are limits. There are limits if it's guarded from the
back end.

Speaker 1 (15:07):
And to that point, what's interesting is if you're on
Grok versus ChatGPT. So on Grok, you know, after, like,
I'll say something like, I feel like I could write
this better, right. I have it in twelve sentences, it
could probably be four. I'll run it through. I
never let anything go out for my

(15:27):
personal brand, my business life, that's written by a robot.
But I have many times run it through for grammar
or said I want to do this plan. How could
I make this more user friendly? Because you know, to
me it's very obvious, but maybe to somebody else it's
not, right. But after you run it through in Grok,
it'll say to you, make more concise, make more persuasive,

(15:49):
make more professional. It gives you options so that you
can take that tone somewhere else. With ChatGPT, you're
not getting that. It's not that kind of thing at all.
And I will say the conjecture that it puts in
like, I'll say, you know, Obama sent, you know, one
point seven billion dollars to Iran in the final
months of his presidency. This is a fact. There is

(16:11):
no I'm not putting any like context around that. And
it'll come back and it'll give me this filler in
ChatGPT of, Barack Obama felt that the sanctions needed
to be lifted, and as a result, he felt like he, what,
what's the word? You're using a lot of feel here.
There's a lot of felt and maybes, and I'm not
interested in that. I'm just asking for like the date,

(16:33):
the time, did it happen, and it can't do it,
whereas Grok is much more like, this happened there, and
it doesn't matter if it's right or left, which I
think is very interesting.

Speaker 2 (16:43):
Yeah, And I think that's part of what Elon has
said about the development of his model is he wants
it maximally truth seeking, which, I mean, what more can
you ask for in an AI than one that would seek
the objective truth, whether humans can see it or not? Yes,
because implicit bias from human perspective makes its way into

(17:03):
how we think about things, write about things, talk
about things. So to have sort of
a preset standard that focuses on truth above
all else is the ideal approach to AI in the future.
I mean, I think otherwise, subtly and gradually it will
go off the rails and come back to harm us

(17:25):
as a society unless it's seeking truth.

Speaker 1 (17:28):
So to that point, I'm going to play a quick
cut that I have of Elon that I thought was
very apropos for this conversation. We'll talk on the other
side of this.

Speaker 3 (17:36):
If you just press the Grok icon on any X post,
analyze the post, and research it as much as you want.
So you can, you can basically, just by tapping
the Grok icon, you can assess whether that post
is the truth, the whole truth, or nothing but the truth,
or whether there's something supplemental that needs to be explained.
So I think I think it's actually we've made a

(17:58):
lot of progress towards, yeah, freedom of speech and people
being able to tell whether something is false or not, or propaganda.
The recent update to Grok is actually, I think, very
good at piercing through propaganda, and then we used that
latest version of Grok to create Grokipedia. I think it's not just
more neutral and more accurate than

(18:22):
Wikipedia, but actually it's a lot more information than
a Wikipedia page.

Speaker 1 (18:26):
So to that point about Wikipedia, which is a very
interesting point that he makes. As somebody that manages talent
and works in media, a lot of my talent has
Wikipedia pages. And one of my clients had said to me,
there's some really weird stuff on my Wikipedia. Can you
go and change it? And I was like, oh yeah, sure,

(18:46):
something I had never done, never even crossed my mind,
and looked, okay, fine, So I go to Wikipedia. It's like,
I mean, literally almost everything in it was not true
about my client: who he was married to, how many
children he had, where he lived, that he'd been arrested
thirty eight times. I'm like, what? I mean, this guy
is like a stalwart lawyer, married for four years. It

(19:08):
was just, and I thought, oh my god, if somebody
googles him and that comes up, they're going to be like,
holy shit, you've got to be kidding me. But none
of it was true. But because it's open to public edit,
it wasn't protected from any of that, which I thought
was super scary and I didn't know it before that experience.

Speaker 2 (19:32):
Yeah, and that is a very concerning aspect of
how these models have been trained historically. Right, if
there are, you know, public figures, there's enough out there
that someone's going to see it or catch it. But
sure, if someone has a Wikipedia page they're not checking
every single day, or, you know, really even once a year,
and that false information goes into that page, if an

(19:55):
AI has, as I mentioned earlier, been trained where
it's reading through documents, and Wikipedia was one of the
earlier knowledge corpuses that it was trained and learned on, yeah,
that's what it knows. So it doesn't know an alternative
point of view if that's the only
thing it knows about person A or person B. So
when you ask about that, it'll respond with confidence that

(20:15):
this is exactly what it is, because as far as
the AI is concerned, it is confident it did find
that information, it is returning it, it is repeating it
as it was reported. So, according to the AI, that
is definitively, factually true. So, yeah, and this is
this goes into the bias of how do we manage
these ll ms, because if they're not trained on objective

(20:40):
sources, like Grokipedia, if that is objective. I'm sure there's
going to be errors everywhere, unintentional, but of course at
least closer to the truth. Sure, that will be the
reference that the language model uses, as opposed to Wikipedia,
where anything could be in there, anything true or false.

Speaker 1 (20:55):
And edited by a stranger.

Speaker 2 (20:56):
A stranger, yeah. And if someone is, you know, talking to
that AI and, you know, referencing that, the AI will say, yes,
I am certain that this is true, and put that down.
There's no guarantee it's right. And then again, if you
don't know that, you're none the wiser. You'll take that, hey,
this LLM's right about most things, it must also be

(21:17):
right about this.

Speaker 1 (21:18):
It's so crazy, I have to say, I really anyways,
it's something I think we'll need to follow. I just
don't know as this grows exponentially and quickly, I mean literally,
you know, as you spoke to a moment ago, the
way that you are looking at AI as an engineer,

(21:39):
as a scientist, as somebody who has studied this, you
know extensively the amount of changes that are happening and
at the speed at which they're happening. To me as
a user, I'm like, oh my, I couldn't do this
a week ago. Yeah, that to me is a little scary,
right as a user.

Speaker 2 (22:00):
Yeah yeah, And if you lay out the timeline, you
know that there is a little bit of a frighteningly
concerning fast pace to it. But yes, also if we
look back, you know, when cell phones rolled out, and then
the iPhone and all that, it happened in like a blink of an
eye and then we just normalized it. I guess the
same type of pattern is happening here where you know,

(22:21):
four years ago, you had a language model that could
create a paragraph, and it could start to pass, you
know, college level tests, you know, in twenty two
to twenty three, right. Then it evolved into, okay, now
it's besting a PhD.

Speaker 1 (22:36):
It can, it can write code, it can do JavaScript,
it can do Python.

Speaker 2 (22:40):
I'm like, what? Yeah, well, that's where we're at now,
where we're sort of in the agent aspect
of this evolution. And what agents are is you can
basically take specialized AI and they have the intelligence to
do anything digitally that you can do. So if you
move your mouse on your screen, it can mimic that.

(23:01):
If you go and log into a website, if you
go in and, you know, start doing what you were
saying earlier, editing a photo or document, it can do all
of that. It will ask for your credentials, it
can reason, and it can actually deliver on
anything you do on a computer. It can now do
that independently, which has a lot of other security risks associated.
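The agent behavior Dave describes, an AI choosing digital actions in a loop rather than just chatting, can be sketched roughly like this. Everything here is hypothetical scaffolding: in a real system the decide() step would call an LLM and the tools would drive a browser or operating system, and the names and URL are invented for illustration.

```python
# Minimal sketch of an agent loop: observe state, pick a tool, act, repeat.
from dataclasses import dataclass, field

@dataclass
class AgentState:
    goal: str
    log: list = field(default_factory=list)
    done: bool = False

def tool_open_site(state, url):
    state.log.append(f"opened {url}")

def tool_fill_form(state, fields):
    state.log.append(f"filled {sorted(fields)}")
    state.done = True  # toy success condition

TOOLS = {"open_site": tool_open_site, "fill_form": tool_fill_form}

def decide(state):
    """Stand-in for the model's reasoning step: pick the next tool call.
    A real agent would ask an LLM here; this one follows a fixed script."""
    if not state.log:
        return ("open_site", {"url": "https://example.com"})
    return ("fill_form", {"fields": {"name": "Lynda"}})

def run_agent(goal, max_steps=5):
    state = AgentState(goal)
    for _ in range(max_steps):  # step cap: one of the guardrails Dave alludes to
        if state.done:
            break
        name, kwargs = decide(state)
        TOOLS[name](state, **kwargs)
    return state.log

print(run_agent("submit a contact form"))
```

The step cap and the explicit tool registry are where the security concerns land: an agent only does what its tools allow, which is why handing one credentials or bank access widens the risk surface.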

Speaker 1 (23:24):
As it goes into your bank account.

Speaker 2 (23:26):
It can absolutely do that, yeah, if you give it control.
And, I mean, a lot of times it does prompt you:
hey, would you like me to use this and do that?
But it is at the level now where we're no
longer interacting with something that can just chat back to us
and reference information and tell us things. It
can do things, it can do things. And you think
about how much we can do digitally through our phones,
through our computers that we do and interact with every day.

(23:47):
Now an AI has the ability to do those same
types of tasks, right, which leads us to, you know,
what's the next evolution of this role? Will we know, right,
who we're talking to on the other end of a
phone or a digital computer? Because voice generation is getting better, yes,
before we know it, a regular voice, your voice, will

(24:11):
be available to be used.

Speaker 1 (24:12):
Yes, I have to say, there are aspects
of that, you know. I was talking to somebody the other
day and they were saying, you know, there are people
in the trades, whether it's HVAC or electrical or plumbing
or what have you, and they are just marching down
the street. They're like, we're raking it in, because you
people are gonna be out of work. And Mark Cuban
was saying, I saw two things in the last two

(24:34):
days that really made me think: Kevin O'Leary, who obviously
is a real estate mogul, among other things a serial entrepreneur,
and then Mark Cuban, who's Mark Cuban. And we'll leave
that there. But they both spoke in the last two
days about AI, the advance of AI and what it
does to the workforce. Kevin O'Leary said it would make

(24:54):
sense for us to look into helping to build the
data centers and and that that would be the next
big business, Like if you can help to facilitate building
out these data centers, managing the data centers, helping to
create the space and the functionality without overtaking, I don't know,
sort of like the same idea of like when people

(25:14):
are like, yeah, we need wind turbines, you're like, you're
a moron. That shit doesn't work and you're killing the
birds and the farms, so knock it off, right, So
we want to get around that. We don't want to
have the data center wind turbine moment. But Mark Cuban
made the point which I thought was really interesting, and
he said, if I were starting out today and I

(25:34):
wasn't handy or in a trade or the kind of
person that was more tactical, right, he said, I would
be a person that would be helping people who are
in single owner operated businesses to learn how to use
AI because a lot of them could use the help
and they just don't know how to use it. And

(25:56):
I thought, that's really interesting. He made a comment that,
of the about thirty three million businesses that we
have in the United States, thirty million of them
are single owner operated. They don't have staff, they don't
have anything. It's just them their company and they're doing it.
And I thought, huh, so if they did have the
help of an AI agent, to your point, they could

(26:18):
reduce their workflow right and probably expedite the process of
whatever business they have and I thought, huh, that's interesting,
But then it comes into how are we facilitating the energy?
So to your point, you and I were speaking before
we got started that this is going to take an
exorbitant amount of energy.

Speaker 2 (26:42):
Yeah, absolutely insane.

Speaker 1 (26:44):
Yeah yeah.

Speaker 2 (26:47):
Where the sort of AI economy is going is right
down those paths you alluded to: agents.
That's the big buzzword right now. How do you generate
an HVAC management agent, for example? Obviously it's not going
to go and install HVACs and do repairs and all that,
but all the business operations that are involved with that, for

(27:09):
you know, a family owned business, or, you know, someone
who only has one or two employees, whether it be,
like I was saying, answering calls in the future once
you have voice capabilities, managing your invoices, managing your calendar,
ordering parts, all of that can be moved
to an agentic AI that can solve some of those tasks.

(27:31):
I mean, not all at this exact moment, but at the
speed we're going, I mean, it's going to be
here very shortly. And those are some examples where
that's a new business, if someone creates that business,
the agent that now solves or helps with, you know,
marketing plans or outreach and things like that. You scale

(27:51):
that across, and that's net new energy usage. So you're creating
not only the original LLMs being used by Meta, right,
by OpenAI, but now the intermediary businesses who
are using those AIs to solve a practical human problem.
And that's all energy demand. So as more and more

(28:13):
of these AI businesses, you know, come up, they become invented,
it's more demand, more strain on power resources, and the
grid that's already at its capacity may not be able
to handle where it's headed.

Speaker 1 (28:32):
So to that point, obviously, people like us that deal
in this type of work. You know, we've talked many
times about EMP issues as well as the Texas
grid being the main supplier of electric for the entire country.
Under Joe Biden, he released the contract for the chips
that go into that grid to come from China. Trump

(28:55):
had gotten rid of that. He's like, I only want
American made chips in the electric grid for the simple
fact that China is not a friend, They are a foe.
Who the hell knows what's in those chips, and now
they're in charge of the electric that feeds the nation.
I mean, have we all lost our mind? Right? So
Trump is now telling AI, we're not going to use

(29:17):
that grid. First of all, it cannot handle it. Second
of all, it would jack up the cost of the
electric costs for every average American citizen, many of whom
are not using AI right, or they're using it without
knowing it, right. They're talking to Siri, they're using their iPhone.
They're not actively using AI like you and I are.
But I read this post which I've shared with you,

(29:38):
and I'm going to share it with the audience now.
And President Trump yesterday had a roundtable with the very
people you're talking about, the CEOs from Amazon and Meta
and Microsoft and open Ai and Oracle and XA and
all these people that are sort of like the they're
the lifting and the starting spot for all things AI.
And they had the roundtable which one of the main

(30:00):
points that he was talking about was the Ratepayer Protection
Pledge, which he's saying means Americans will not pay higher
electric bills due to power usage from AI data centers. Well, how
do we get there? Oh, I know, We'll make all
of these big CEOs figure out how to fund, create,
and provide their own utilities. Okay, so now they don't

(30:22):
only control AI, all of these, quote unquote, as you're
saying, AI agents, but they own all the power, from
nuclear, and they're their own utility. So they run all
the power and utility in the United States, because that's
going to be some massive utility. And he made a
great example. This guy's post, Ricardo T was his
name. Full disclosure, I have no idea who this dude is,

(30:45):
but he made an excellent analogy which says, by the
end of this year twenty twenty six, there will be
five US data centers which will each consume over one
gigawatt of continuous power. To give you perspective, one gigawatt
powers eight hundred and fifty thousand homes. So five of

(31:07):
these facilities will use more electricity than some entire countries,
which obviously the grid cannot handle. So we're talking about
now they're going to own all this power. They're going
to be buying nuclear reactors. We've got Meta signing twenty
year nuclear deals, Chevron's getting a two and a half
gigawatt natural gas plant in West Texas. So what exactly

(31:32):
are the tech companies controlling? So they're controlling compute, data,
energy infrastructure. Like, we're not exactly talking about tech companies anymore.
These are becoming like superpowers, they're becoming their own I
hate to say it, but like their own countries. Yeah,

(31:53):
your thoughts, big setup.
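The home-equivalence arithmetic in the post quoted above can be checked in a few lines, taking the post's own figures as given: five data centers at one gigawatt of continuous power each, and the 850,000-homes-per-gigawatt rule of thumb (the post's number, not an official one).

```python
# Sanity check on the quoted data center power figures.
GW_PER_CENTER = 1.0      # one gigawatt continuous per facility (from the post)
CENTERS = 5              # five US data centers by end of 2026 (from the post)
HOMES_PER_GW = 850_000   # the post's rule of thumb for homes per gigawatt

total_gw = GW_PER_CENTER * CENTERS
home_equivalents = total_gw * HOMES_PER_GW

print(f"{total_gw:.0f} GW continuous")              # 5 GW
print(f"about {home_equivalents:,.0f} homes")       # about 4,250,000 homes
```

So the five facilities together draw the equivalent of roughly 4.25 million homes, which is the scale behind the claim that they would out-consume some entire countries.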

Speaker 2 (31:56):
Right, it's another one. And I feel like this one
goes two ways because you know, most people they get
their energy bill, they see the total, right, maybe they
look at their kill a lot usage for the last month,
but they don't really go into the intricacies of how
that energy bill works. And it's actually somewhat complex from
the energy standpoint, because you have peak pricing, you have

(32:18):
time of use. You have different categories that a user
can be placed into based on their location,
their region obviously of the three grids. But then for example,
the let's take peak pricing certain hours of the day.
You know they recommend don't run your dishwasher or whatever
it might be, right, because that's when the general societal

(32:39):
demand is at its highest, so you get billed at
a higher rate for what you use. I mean, that's obviously
because that's when everybody's active and doing things, and also
most likely when people are going to be using AI
as well, because they're up, right. So you think
about it, how, like you said, it's going to be
in so many things, so people will be using it indirectly

(33:01):
without knowing it. So they'll be using it through their phones,
talking to you know, various agents on the other side
of a website. But that's all going to be additional
AI calls to the data center and back and forth.
So to that kind of point there is that now
will increase all of our energy bills because we're directly

(33:21):
and indirectly using AI, either deliberately or just by living life.
So as that energy consumption goes up, the pricing goes
up because the demand goes up. The energy companies have
to sort of buy from third party providers, which goes
into that whole equation. So moving them into their own
independent system does separate that usage from what we use

(33:45):
as regular consumers. Okay, so we're no longer sort of
using them and then paying the energy companies to use
them as well, you know, in this sort of cyclical
loop of just energy demand and consumption. So right, it
does turn them into their own independent modes of

(34:06):
energy companies, data companies, information companies, knowledge companies, really knowledge companies.
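The peak versus off-peak billing idea described above can be sketched in a few lines. The rates and the peak window here are made up for illustration and don't reflect any real utility's tariff:

```python
# Toy sketch of time-of-use billing: the same kWh costs more during peak hours.
# Rates and the peak window are hypothetical, not any real tariff.
PEAK_HOURS = range(16, 21)   # e.g. 4pm-9pm
PEAK_RATE = 0.30             # $/kWh during peak, hypothetical
OFF_PEAK_RATE = 0.12         # $/kWh off-peak, hypothetical

def bill(usage_by_hour):
    """usage_by_hour: dict mapping hour-of-day (0-23) -> kWh used that hour."""
    total = 0.0
    for hour, kwh in usage_by_hour.items():
        rate = PEAK_RATE if hour in PEAK_HOURS else OFF_PEAK_RATE
        total += kwh * rate
    return total

# Running the dishwasher (1.5 kWh) at 6pm vs. 10pm:
print(bill({18: 1.5}))   # billed at the peak rate
print(bill({22: 1.5}))   # billed at the off-peak rate
```

Same appliance, same energy, but the 6pm run costs two and a half times the 10pm run under these assumed rates, which is why utilities nudge people to shift usage off-peak.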

Speaker 1 (34:12):
Well, it's also that idea, Dave, if you think about it, right,
we're talking about, and it's a double entendre in
every way, a power company. It's a power company.
It has power over you, and it has
the power literally to source and feed its own demands.
So at what point does the nation start saying, hey,

(34:35):
we would like to lease power from you that you're
not using at xyz rate and then we're going to
send that back to the American public Because you probably
know better than me. How long have we been saying
the Texas grid is completely and totally overloaded. It cannot
withstand the amount that we are pushing onto it,

(35:00):
like the demand is so much greater than its ability
to provide. Am I speaking out of turn? Or is
that accurate? No?

Speaker 2 (35:08):
I mean you see it sometimes in those peak summer days.

Speaker 1 (35:12):
With the blackouts? Sure, the brownouts in California and then the
blackouts in other places.

Speaker 2 (35:16):
Sure, but it is it is already here because I mean,
then you think about the other aspects. We have increased
EV usage, you know, more and more EV cars are
being bought, more people are moving to the electric society.

Speaker 1 (35:29):
Right, not just the ballots, which is still, by the way,
completely and totally supported by fossil fuels. But we'll leave
that there for the environmentalists.

Speaker 2 (35:37):
Yeah, exactly, Like, don't let the.

Speaker 1 (35:39):
Facts get in the way of your green efforts. Okay, please,
good luck.

Speaker 2 (35:45):
Back to the power plants.

Speaker 1 (35:46):
I just can't. I'm like, some things just are I'm like,
that's cool if you want to plug in, but by
the way, that plug doesn't work unless there's an energy
grid that is supported by fossil fuels to give you
that energy. But keep going. You're doing great. No cap, right,
good talk.

Speaker 2 (36:02):
Yeah, yeah, I mean, well, hey, if they control the
power and the knowledge, then they can convince you of anything.

Speaker 1 (36:06):
So that's right, and the message.

Speaker 2 (36:08):
To AI, right, And you know that's a good point
too about the power control, because you know, right now
we're kind of on the transition to AI as our
knowledge repository. For years, we had Google, where we'd go
and Google search something and whatever. One of the top hits,
top pages are what we could find and access. So
if you were a piece of information that was factual,

(36:29):
but you were not in Google's top two or three pages,
you wouldn't be discoverable, really, because very few people are clicking
that far in. So with the knowledge you only get shown,
you either have to say, all right, do I take
this because it's being shown to me through Google? There are
not so many other alternatives, so I guess this must
be the truth, you know, if you're deductively questioning it.

(36:50):
Same thing with AI, same thing with power control, and
sort of the concern of where do we draw the
lines of autonomy on what these companies can do. Because if,
you know, going back to the bias, they're
not only independent energy-wise but control what they can
share with us knowledge-wise, and then they become integrated

(37:12):
with all of our digital applications, we have no alternative
in some cases, and then we're sort of beholden to
what they tell us or don't tell us, or what
truth they want to convey, or how they want to
subtly massage it. Like you were saying about the four
million dollars in pallets. They'll kind of, maybe they'll give
you the answer, but they'll position it in a way
that convinces you that, oh, you should think about it

(37:34):
this way as opposed to that way.

Speaker 1 (37:36):
Well, I'll give I'll give you a little insight into
how I research, which is really funny. Obviously, if you're
in your thirties or forties, you used Britannica, you used encyclopedias,
you had pencils, you know, you did things a little differently.
I personally think we need to go back to that
for a few reasons. One, we have nine year olds

(37:56):
wearing glasses because of screen time and the strain on
your eyes, right. And then they're playing video games. And you know,
my older kids are on Snapchat. I mean, and you
know where we used to like text each other and
we thought that was so cool, or we had instant
Messenger we thought that was so cool, or Google Chat.
These kids aren't even using words. They send pictures of

(38:18):
their face. If they don't want to talk, they send
pictures of the ceiling. I'm like, why are you sending
what is happening? Why are you sending a picture of
our ceiling? I don't understand. Oh, that just means
they don't want to talk. Okay, you could just
not answer them. That's also a good way. I've tried
that a couple of times. It's the old school way of
not answering. I'm not interested, thanks for reaching out. I
mean just this idea of always being connected, always doing

(38:41):
AI, always having all this integration. But when I search,
I go to the twelfth or the thirteenth page and
I work backwards because I want to see what's actually
what is the truth of the story, right, Because I
know the SEO, which for those of you listening is
search engine optimization, and you can pay for that will

(39:02):
put you in that top page or the very top
of the second page. Well probably, you know, if you're
paying to get to number one, I probably don't want
to know who you are. I want to know who's
on page four because they're the guy that's grinding it
out and there is some truth to that.

Speaker 3 (39:16):
You know.

Speaker 1 (39:17):
I'll get back to page one and two. That's not
a problem. But ChatGPT, while it was first, and
OpenAI was first, I will say I think Grok
has given it a run for its money, which really
speaks to the change, because I don't feel like anybody
ever beat Google. Like, I like Safari, I like Brave,
I like DuckDuckGo. I feel like all of

(39:37):
them are less biased than Google, but Google is still
the king.

Speaker 2 (39:44):
Yeah, yeah, I mean Google, to be fair, they are
the Google Brain, the Google labs, all of their sort
of research, the scientists and academics who brought us
the algorithms initially. I mean, obviously a lot of people
were involved. But you look at the foundational
papers and research work, a lot of it was
out of Google, one hundred percent. Other companies sort of

(40:05):
took it and evolved, advanced, and commercialized it. But
the fundamental kind of ideas of how do you mathematically
create an AI word-generating thing was based out of
Google labs. So yeah, yeah, they are. They know, they
know too much, they.
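For what that "mathematically create a word-generating thing" amounts to at its very simplest, here's a toy sketch of next-word prediction from counted word pairs. The tiny corpus and the bigram approach are illustrative stand-ins, far simpler than the transformer models those Google papers actually introduced:

```python
from collections import Counter, defaultdict

# Toy corpus; real models train on vast amounts of text, not one line.
corpus = "the grid is strained the grid is growing the demand is growing".split()

# Count which word follows which (bigram counts).
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict(word):
    """Return the most frequently observed next word, or None if unseen."""
    counts = following.get(word)
    return counts.most_common(1)[0][0] if counts else None

print(predict("the"))   # the word most often seen after "the"
print(predict("is"))    # the word most often seen after "is"
```

Chaining predictions like this generates text one word at a time, which is, in spirit, what the large language models do with vastly richer statistics.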

Speaker 1 (40:25):
Know too much. Yeah, it scares me. Now they're going to
know too much and they're going to be in charge
of how much we know about what they know. Like,
I'm out, I need I don't know, I need a
farm far away with dogs and other things, chickens. I'll
take anything with four legs over this crap. But Dave,
you're the best. Thank you for joining Rogue Recap today. Guys,
if you're listening, thank you for being with us. You

(40:46):
know what to do: at Rogue Recap, at Lynda Mick, roguerecap
dot com. We will be back tomorrow with more. Stay safe
out there and pray for our troops.

Speaker 2 (40:53):
Night.

Speaker 1 (40:53):
Everybody,
