
June 7, 2023 29 mins

Allum Bokhari is an investigative tech reporter at Breitbart News. In 2018, he stunned the media when he obtained and published "The Google Tape," a 1-hour recording of Google's top executives reacting to the 2016 Trump election and declaring their intention to make the populist movement a "blip" in history. He also obtained "The Good Censor," an internal Google document admitting to censorship, Facebook's list of so-called "hate agents," and YouTube's search blacklists. Allum talks to Tudor about all of this and Artificial Intelligence.  The Tudor Dixon Podcast is part of the Clay Travis and Buck Sexton Podcast Network - new episodes debut every Monday, Wednesday, & Friday.  For more information visit TudorDixonPodcast.com

Follow Clay & Buck on YouTube: https://www.youtube.com/c/clayandbuck

See omnystudio.com/listener for privacy information.


Episode Transcript

Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Speaker 1 (00:00):
You're listening to the Tudor Dixon Podcast on the Clay
and Buck Podcast Network. Welcome to the Tudor Dixon Podcast.
I'm Tudor Dixon, and I'm so glad you are tuning
into our podcast today, because I have one of my
favorite people to talk to, because he's so much
smarter than me. He knows all of the stuff about tech.

(00:21):
His name is Allum Bokhari. He's an investigative tech reporter
for Breitbart News. He's spent his career exposing big
tech, how they play with politics and seek to censor
dissenting opinions. Google, Facebook, YouTube, they're all censoring what we
do and who we see. And he knows this because
they actually censor Breitbart too. That was, I think,

(00:41):
one of the first conversations we had together, Allum, was
about how you can't really Google and get a Breitbart
article anymore. So you know all about this, don't you?

Speaker 2 (00:51):
Yes, that's right, they really censored Breitbart News
quite heavily during the twenty twenty election. There was a massive,
massive drop. What we did is we compared the data
from the twenty sixteen election to the twenty twenty election,
and the visibility of Breitbart News links in Google search
results was reduced by over ninety

(01:15):
seven percent. It was a massive, massive drop. And you know,
a drop like that doesn't really occur organically because we
hadn't lost a significant number of our readers or anything
like that. It was just the Google search results that
had seen that massive change. So it's obvious that this
was a result of Google changing their algorithm between the
twenty sixteen and twenty twenty elections. And that's kind of

(01:39):
what we should have expected, because, you know,
we caught their CEO, their executive team, in a leaked
video right after the twenty sixteen election discussing the changes
they were going to make to make sure that a
similar result didn't happen again. It even showed one of their
executives saying they wanted to make populism, the populist movement,
a blip in history.

Speaker 1 (02:00):
But it's funny that we are talking about this,
because just over the weekend, I had a friend who
is, I would say, I guess I would call him
a conservative influencer, but also has his own organization, and
he was saying, it's just not like it was
three years ago. He said, I can't get anywhere on Facebook.
I can't get anywhere on Twitter. Do you see that

(02:21):
this is happening to conservatives still, because I feel like
a lot of conservatives came out after Twitter with Elon
Musk and they said, oh, okay, so we're not crazy.
This is obviously happening, but it doesn't seem to have
moved companies like Facebook or Instagram or YouTube or any
of those.

Speaker 2 (02:36):
Yeah, it's important to remember that, you know, with the
exception of Twitter, the big tech companies are
really still in the hands of the
mainstream, of people who have bought into this idea that
they need to censor misinformation and disinformation and hate speech,
which of course are all euphemisms invented by the mainstream

(02:56):
media and academia to censor conservatives. Google and Facebook are
still very much doing that. And, you know, as powerful
as Twitter is, there's no bigger driver of
clicks to news publishers than Facebook and Google. Those
are still the two really big tech platforms.

Speaker 3 (03:14):
So if progressives still have their

Speaker 2 (03:16):
claws in those companies through the censorship teams there,
the trust and safety departments, then it's still going to
be a very uphill struggle for conservatives to compete with the
mainstream media on those platforms.

Speaker 1 (03:29):
Well, speaking of progressives, I want to talk to you
about something that you wrote about recently, because I feel
like this is kind of biting the hand that feeds you,
but maybe I'm wrong. Now the climate
people are going after video games, which I need you
to connect for me, because I really don't get
that at all. But isn't this kind of like their

(03:49):
own team? You know, the video game people are the
tech people. So how does this work?

Speaker 3 (03:54):
I'm not sure it is their own team actually.

Speaker 2 (03:56):
I mean, you know, back in twenty fourteen, I remember,
one of the earliest stories I
covered at Breitbart was this thing called GamerGate, which
was really like one of those early battles over wokeness.
We see wokeness in so many industries these days.

Speaker 1 (04:09):
You know.

Speaker 3 (04:09):
The recent one was the beer industry with Bud Light.

Speaker 2 (04:12):
But back in twenty fourteen, there was a massive controversy
over wokeness in video game journalism, and that turned
into a massive controversy that lasted over a year, and
gamers were really, really upset about, you know, the intrusion
of political correctness into their hobby, which was previously
quite free of politics. You know, you've got to remember,
video games tend to be quite, you know, a male

(04:32):
dominated hobby, and male voters in
the US tend to lean Republican more than women
voters do, if you look at the polls.

Speaker 3 (04:41):
So I kind of compared it to the NFL.

Speaker 2 (04:44):
I mean, a lot of, you know, people who
were fans of the NFL, many of them were political moderates.

Speaker 3 (04:49):
They were

Speaker 2 (04:50):
apathetic, but then they became politicized with the kneeling
controversies and Colin Kaepernick, right? So, you know, when
you see an industry become
politicized like this, you know, it's hard

Speaker 3 (05:02):
To predict which way it will go.

Speaker 2 (05:06):
The story I covered for Breitbart is about the net
zero agenda. So, you know, climate activists are upset that
the devices that gamers use, the PCs
or consoles, use a lot of energy, and they're trying
to change the industry, to make
companies make games that are less demanding on

(05:29):
devices, to reduce the climate footprint, which is what
we've seen in almost every other field.
You know, how can we change things?
How can we restrict

Speaker 3 (05:40):
things to meet their climate targets?

Speaker 2 (05:42):
These net zero climate goals are restricting the farming of cows,
the eating of meat, because, you know, farming creates all
these carbon emissions. In France, they banned short haul flights
on commercial airlines, but not private jets, right?

Speaker 1 (05:56):
I just saw that. How are they even going to do that?
How will that go over in France?

Speaker 3 (06:02):
Yeah?

Speaker 2 (06:02):
I mean the French are already quite upset about Macron's
carbon targets.

Speaker 3 (06:06):
His net zero goals.

Speaker 2 (06:07):
This is one of the reasons why we've seen
so many protests in big cities over the past two years.

Speaker 3 (06:13):
Of course, it's quite common in France, the big protests

Speaker 2 (06:15):
in cities, but this is one of the core
grievances cited by people like the Yellow Vest protesters

Speaker 3 (06:22):
In that country.

Speaker 2 (06:23):
Interestingly, you know, they banned short haul flights for average consumers,
but private jets are still allowed to fly
those same routes.

Speaker 1 (06:31):
Of course. So if you can drive there within like three hours, right?
Is that what it is?

Speaker 3 (06:37):
Yeah, it's about that much.

Speaker 2 (06:38):
Yes. So you can't take flights for those distances anymore.
You have to take the train, or you have to
drive, unless you're wealthy enough to charter a private jet.
That seems to be the common theme with all of
these net zero agendas. They're not trying to ban things outright,
they're just trying to put them out of the reach
of the average consumer, almost like they're trying to make
people comfortable with lower standards of living, like they're preparing

(07:01):
people for that.

Speaker 1 (07:02):
Oh, they are preparing people for that. I mean, that is
what we believe if we do not fight against this.
The goal is to make sure
that there is no longer a middle class. You have
the ultra rich, and then everybody's at the same level.

Speaker 2 (07:19):
It's very convenient to persuade people that you have to
do it, you have to accept lower standards than previous generations,
because you need to do it to save the planet, right?
That's a very persuasive argument for people who don't, you know,
actually interrogate it and look at the evidence.

Speaker 1 (07:33):
It is because again, you have young parents who are
being told this world is not going to be around
for your children unless you do this, and that's I
think that's the ultimate way to twist someone's mind. Well,
I have to protect my kids, so I have to
do this.

Speaker 2 (07:48):
I find the best counterargument is to point out,
why are they so relentlessly determined on these specific
solutions? You have to give up your gas stove, you
have to give up meat, you have to give up
your gaming PC to save the planet, when, if you
want to achieve net zero, there are ways to do
it without reducing the standard of living for everyone, the
most obvious example being nuclear power.

Speaker 3 (08:11):
I think I mentioned this on Twitter.

Speaker 2 (08:12):
There was a recent example where a nuclear plant in Finland
had to actually shut down briefly, had to pause operations,
because it was making electricity so cheap it couldn't make
a profit. So, you know, there are ways to reduce
the carbon footprint without harming consumers, but they seem relentlessly
focused on reducing the standard of living, which seems
to suggest an agenda beyond simply reducing carbon emissions.

Speaker 1 (08:33):
Well, there's also a lot of money in coming up
with other solutions too, so I think that there's a
whole game going on behind the scenes here. But I
wanted to ask you about AI as well, because there's
a lot happening in the AI world right now. We've
seen videos coming out and people don't know if they're
real or if they're not real. We see that Japan

(08:56):
is saying, we're going to extend this to even illegal
sources coming in, because we want to make sure
that we can increase population, or increase our ability to
work with our low population, because they have not been
having as many kids as they were expecting. The
whole story that we're getting around AI right now is that it's
extraordinarily dangerous. You keep hearing these AI experts come out

(09:18):
and say, we've got to stop it. But if other
countries aren't stopping it, they're even willing to go to
sources that aren't legal, where does the United States stand
if we are trying to prevent ourselves from expanding?

Speaker 3 (09:32):
Yeah, that's the interesting thing.

Speaker 2 (09:33):
It's important to unpack some of these claims that are
being made about the dangers of AI. They are all
legitimate dangers of AI, but I don't think they really
apply to large language models, which is the specific
type of AI

Speaker 3 (09:46):
We've seen explode recently.

Speaker 2 (09:48):
ChatGPT and programs like that, large language models,
they're not artificial general intelligence, which is what people tend
to mean, what the theorists are talking about,
when they say, oh, this could be a

Speaker 3 (10:01):
really dangerous thing if we let it get out of control.

Speaker 2 (10:03):
Large language models are very much controlled by humans and
are essentially just looking at patterns in human text
that they digest and turn into outputs.
I think, if I
had to read between the lines, a lot
of the companies have a vested interest in saying AI
is very dangerous, because if they say AI is very dangerous,

(10:26):
then they can say, well, only responsible people should be
allowed to do it, you know, licensed companies or something
like that, and that gives them all the control, right?
It reduces competition. And like you said, other countries
with different priorities might not

Speaker 3 (10:40):
be putting restrictions on AI.

Speaker 2 (10:42):
Japan doesn't want to put any restrictions on AI, because
they see it as the solution to their declining population, a
solution that doesn't involve mass immigration, which they really don't
want to do.

Speaker 3 (10:53):
And, you know, there's obviously China as well. We've seen
this big push in Washington, D.C., to reduce,

Speaker 2 (10:59):
to regulate the export of advanced computer chips to China,
and that's really trying to prevent the development of AI
technologies and other advanced technologies that could be used in
military systems. So yeah, there's definitely a big foreign policy
angle to consider here.

Speaker 1 (11:16):
I mean, can't it be used to manipulate people in
elections? I mean, I could even see a
country like China putting videos out, and their people could
completely believe that all of this is happening in another
country. I don't know if you saw the
DeSantis office video where they make him into Michael Scott

(11:37):
from The Office and he's wearing a woman's suit. I mean,
it looks so real, it is shocking.

Speaker 3 (11:44):
This is a legitimate problem.

Speaker 2 (11:46):
I mean, it's really a shame that the Left has
spent five years just destroying the objective meaning of words
like misinformation and disinformation, because, you know, there is real
misinformation, and AI is an example of how it could
grow really, really powerful, you know, imitating people's voices,
people's image, in a way that's indistinguishable from the real thing.

(12:10):
That's where you actually do need some ways, you know,
to help people detect misinformation and disinformation. The problem is,
how is anyone going to trust a third party or
a company to identify misinformation for them, to identify AI
fakes, when they've spent so long using this as a
tool of political partisan warfare, so that it has no real objective
meaning?

Speaker 1 (12:30):
Let's take a quick commercial break. We'll continue next
on the Tudor Dixon Podcast. We've been talking about elections
quite a bit, obviously, coming off of twenty twenty two.
What are we doing wrong on the conservative side, Why
aren't we getting people elected? And what tech are we
not using, which I would argue we're really not using

(12:52):
any tech or very low levels of tech compared to
the other side that is going out and they're meeting
you where you are. I mean, they're going into your phones.
They're getting the message that you want to hear directly
to you. They're very good at that. So I had
someone the other day say, well, imagine at some point
it'll be your candidate. They will be able to go

(13:13):
into your phone, find out you had an appointment at
the vet that morning, and your candidate will then have
a video that comes up later and say, I have
a busy life, and just like you, I had to
take my dog to the vet today, and it'll all
be AI. I mean, do you see a world where
someday this happens and there's targeted elections like that, to

(13:34):
the point where people go, wow, I'm just like
that person, I want to vote for them?

Speaker 3 (13:38):
You know, I think it's already happened to some extent.

Speaker 2 (13:40):
I think it happened a long time ago, back
in, when was it, the Cambridge Analytica scandal,
so this was like twenty seventeen, twenty eighteen. One of
the stories that the mainstream media ignored from that was
a former Obama campaign official coming out and saying, well, yes,
this Cambridge Analytica stuff is bad as an invasion

(14:01):
of privacy, but actually Facebook gave us way more. They
gave the Obama campaign way more data, voluntarily, back in
twenty twelve, as a sort of gift to the campaign
to help them win. And that was the entire social
graph that Facebook had. So you already had people inside Facebook,
you know, giving all of this data on Americans

(14:21):
to the Obama campaign in twenty twelve. Yet the campaign
staffer would come out and admit this in twenty eighteen,
and nothing was done about it.

Speaker 3 (14:28):
Of course, it was brushed under the carpet.

Speaker 2 (14:31):
So even, you know, beyond the AI, you have to
worry about people inside these Silicon Valley tech companies that
have so much targeted data on Americans just, you know,
voluntarily handing that over to the Democrats, because
so many people in Silicon Valley lean
heavily to the left.

Speaker 1 (14:47):
So this is where I think that people are not
understanding how elections have changed and how important that
data actually is. So you have this on the left,
where they have endless access to all of our information.
And that's how people market to you as well. So
they look at you and they say, okay, this person
drives to this school every day, they buy from this

(15:09):
grocery store, they buy guns, they don't buy guns.
Their systems can look at every detail about
you and know exactly what's important to you, and then
feed you that information to come and vote. So if
they know you're very pro life, they're never going to
hit you with the information on pro choice. They're going
to go to you with something else that you are

(15:31):
going to consume and love, and it's going to compel
you to go out and vote. But I would argue
that on the conservative side, we are completely behind on
this and have very few sources of this data. And
when we do have this data, there are consultants that
hold it hostage for high amounts of money. Do you
think this is the

(15:53):
weight that is helping the left win in many of
these cases, even though their policies are not really that great.

Speaker 3 (16:00):
I think so. You know, it's a

Speaker 2 (16:01):
wonder that Republicans win at all when the left has
this huge advantage in tech, not to mention all the
mainstream media companies on their side, which are artificially boosted
through Google and Facebook and YouTube. I think the
only problem the left has is that, no matter how
much data it has, no matter how much advantage the
tech companies give to them, many of the left

(16:24):
wing agendas and policies are just fundamentally unpopular. No matter
how much you target people, you're not going
to persuade a large majority of people to see
biological men as women. No matter how large a majority of
people you can target
through these tech platforms, you're not going to persuade them to

(16:44):
stop seeing the crime on the streets, or the declining economy,
or the rising prices at gas stations
and groceries, and all the basic truths
and basic facts that the left is trying
to cover up that are actually right there in front of
people's faces.

Speaker 3 (16:59):
No amount of tech advantage can get over that.

Speaker 2 (17:02):
But I certainly think that Republicans would be winning by
far more if it wasn't for these biased tech platforms,
if there were more controls on the favoritism that tech
companies have shown the Democrats over the past
half decade.

Speaker 1 (17:19):
So, just selfishly, I will say to everyone listening again,
if you are investing in elections, ask people what they're
doing to win. I think this has been something that
you know. Just last week I met with someone and
they said, well, we put millions of dollars into this organization,
and I'm not sure what they did. It's the only
time I think people invest without saying, well, what's your plan.

(17:43):
I mean, you wouldn't invest in any other company without
saying what is your plan to get to the endgame
that you are looking for? But for some reason that
seems to be something that we are not doing on
the Republican side. So I appreciate the fact that you
brought that up. And this is something that I also think
that you know, with Breitbart, I used to see Breitbart

(18:06):
as the news magazine, or the online news to go
to, to get the conservative view, to understand how the
story was being twisted. But I think that the left
has been very smart about how they have stopped that
information from getting out. So how as a news organization,

(18:26):
as the media has changed so much, how do you
fight this? How do you fight the ratcheting back
from online services?

Speaker 3 (18:35):
It's very difficult when we have all of the suppression
from Google. You know.

Speaker 2 (18:39):
The great advantage that Breitbart has is, you know,
many of our readers don't come to us through Google
or Facebook. They come to us by, you know,
manually typing the URL into
their browser, or having us as their bookmark, or subscribing
to our email updates, to our
email newsletter. And that's a great way to get

(18:59):
around censorship. You know, if you're not seeing Breitbart in
your feed, then you should just sign up for the
email updates, you know, because if Facebook is suppressing the
news stories in

Speaker 3 (19:11):
your feed, or if Google hasn't

Speaker 2 (19:13):
shown them to you in search results, at least you'll
get it in your inbox when there's an update,
you know. And I'm sure you have a
newsletter as well that people can sign up for,
and that'll stop Facebook from suppressing your posts, or at
least help people get around that if it's happening. But yeah,
these are ultimately temporary fixes. I still maintain that we

(19:33):
ultimately need regulation to stop the political favoritism on the
part of Silicon Valley.

Speaker 1 (19:40):
Well, I am glad that you came on here today,
because I think it's so important that people read your
work on Breitbart. You have recently talked a little bit
about the FDA approving Neuralink. This is something that is
incredibly interesting to a lot of people out there who
struggle with paralysis, who struggle with any type of illness

(20:01):
that prevents them from seeing or speaking. There are
endless possibilities, we believe, with Neuralink, but it's also
kind of scary. So give us your take on what
this means, to have an FDA approval for something like
this for human trials.

Speaker 2 (20:18):
Yeah, this is very interesting. You know,
a lot of people are quite scared by it
as well.

Speaker 3 (20:23):
It's a very creepy

Speaker 2 (20:24):
idea to have something inside you, you know, implanted
into your brain and affecting your brain

Speaker 3 (20:29):
waves.

Speaker 2 (20:30):
But, you know, there are, like you said, very
important medical applications of this technology. So I wouldn't say
anyone should be entirely opposed to it. You know,
like you just said, fixing paralysis, fixing eyesight, all
sorts of potential applications for brain implants.

Speaker 3 (20:48):
And now Neuralink is

Speaker 2 (20:49):
able, the FDA says it can conduct trials
on humans, you know, human volunteers, which is
very interesting. We'll see what happens with that. Obviously,
there are also lots of, you know,
sci-fi horror scenarios you can imagine, where, you know,
Neuralink messes with your perception and, you know, puts

(21:09):
propaganda or misinformation, or, you know, whatever else, the same
fears we see with the AI, except they're actually implanted
directly into your brain. But I think, as
far as medical applications go, you know,
the technology can be quite exciting.

Speaker 1 (21:25):
I think we feel like this would be a huge
breakthrough for people who are suffering with paralysis
or things like that. You know, you look at that
and you go, wow, this is amazing. But there is
a big question, could this fall into the wrong hands?
And then, I mean, you're right, I think of sci-fi
movies where you have like a super army
and they're completely controlled, their minds are

(21:49):
taken over by whoever decides that they're going to control them.
Is that a possibility? Are we safe from that?

Speaker 3 (21:56):
Well, I mean, the technology hasn't advanced that far yet.
But there is a debate, you

Speaker 2 (22:01):
Know, starting to emerge, a big potential divide between the
so called the transhumanists and some people actually openly identify
as transhumanists. How can we use technology to become more
than human, to overcome illnesses and mortality and you know,
you know, you know, make sort of cyborg superhumans. Some

(22:23):
people in Silicon Valley really are quite focused on that,
on that goal, they're obsessed by it. I don't know
if you've seen the story of going around recently about
the tech billionaire. He's trying, trying everything to live forever.
He's done a few media interviews about it. That's actually
quite a common preoccupation amost the very wealthy elite in
Silicon Valley. How can we use tech to help us,
to help us live forever. But then there's the anti

(22:45):
transhumanist side, which says, well,
you know, actually, we probably shouldn't merge with machines
and merge with machine intelligence, because we'll lose,
you know, the essential qualities of what it means to
be human. I'm not sure where I stand on
the debate just yet, just because a lot of it
is quite hypothetical, and I think, you know, we should

(23:05):
see where the technology goes first before we start saying, well,
you know, ban Neuralink or ban brain implants altogether.

Speaker 1 (23:12):
Well, so I did see this story just last night,
someone sent it to me of the billionaire who is
trying to return to an eighteen year old body and
then have that eighteen year old body forever. I mean,
and this is also something that we see in movies, right,
the person that never ages. And I think
that this guy is in his forties? Is that what
it is?

Speaker 3 (23:32):
I believe he's in his forties. It could be older
than that. Let me do a quick fact check. Yeah,
forty four, all right.

Speaker 1 (23:39):
Yeah, see this is kind of offensive to me because
I'm like, do we really I'm in my forties. Do
I really have to be worried about going back to
preserving my eighteen year old body right now? I mean,
give me a break. This is like, what are we?
How are we here? I mean, I'm not opposed to
it if somebody's going to offer it up to me,
but how does that even work?

Speaker 3 (23:59):
Yeah?

Speaker 2 (23:59):
Trust me, it's like Silicon Valley hubris, right? We
can do anything if we just
pour enough money into it. And, you know,
think of stories like Croesus, and, you know,
all these mythical stories about people who aim for
too much wealth or eternal youth, and something always goes wrong,
right?

Speaker 1 (24:18):
Yes, that's exactly right. I'm like, okay,
so what happens then if you are young forever?
I mean, that's weird too. I guess I
don't know, it seems like, yeah, it's getting to a point.

Speaker 2 (24:32):
What do you lose on a human level if you
don't have to worry about aging and mortality anymore? Do
you just lose all motivation because you don't have that,
you know, time limit anymore?

Speaker 3 (24:43):
What's going to happen?

Speaker 1 (24:44):
I think that's the question that people have with all
of this stuff, with the AI, with everything
that we've talked about: what are the dangers of
messing with nature in this way? I mean, you
see the movies, we're like, oh, it's time travel, and
if you screw up something back in time,

(25:05):
you screw up the future forever. But I mean, you
sort of have to look at this stuff. And I
think that we've all seen enough movies to see the
AI and Neuralink and all of that as somewhat scary.
And I think that these stories that are coming out saying,
you know, Italy is banning this and Japan is going
to the opposite extreme, that's why I think there's a
lot of fear built around that right now. But I

(25:28):
guess you're right, to a certain extent, that these are,
I think that these are more advanced than we probably know,
but there are protections around it.

Speaker 3 (25:36):
Still, there are protections around it.

Speaker 2 (25:39):
One thing to be concerned about is the
kind of restrictions the left

Speaker 3 (25:44):
want to build into AI.

Speaker 2 (25:46):
You know, as you might predict, they're rather
silly, and they're not what, you know, most ordinary people prioritize.

Speaker 3 (25:53):
I did an article a few weeks back about what
the Biden FTC is looking at.

Speaker 2 (25:58):
They did a big announcement recently about how they're prioritizing
AI safety, but if you look into the research
they're citing, you know, it's all about, oh, we have
to stop AI from using crime geolocation data, even
if it's accurate, because, you know, even if it's accurate,
it's still unfair. So, you know, the left wants to actually

(26:18):
stop AIs from using real data and actual facts, because,
you know, one of the reasons why I'm kind of
a little bit pro AI is because an unfiltered,
unencumbered AI is simply looking at patterns and analyzing data
and coming to conclusions, and in many cases, you know,

(26:38):
the data actually favors reality.

Speaker 3 (26:41):
You know, in all cases the data favors reality.

Speaker 2 (26:43):
Right. So if you have factions in society that are
anti-reality, opposed to reality, and want to stop
the truth getting out, actually an unfiltered AI would
work against them, and that's what the left is really
worried about.

Speaker 1 (26:54):
Oh wow, that's interesting. Well, what about this recent story
where they had, I guess it was a simulation, where
the AI had to shoot so many things, and
then the human was preventing it from doing that, and
it ended up saying, okay, if I have to be
on mission, I have to eliminate the human to stay
on mission. I mean, that's also a concern, is, well,
does it one day attack us, and it becomes smarter

(27:17):
and says, they're stopping me, I'm going to take them out?
I mean, was that a true story?

Speaker 2 (27:22):
Yeah, well, I certainly wouldn't give the AI the
nuclear codes just yet.

Speaker 3 (27:28):
But that actually was not real. That

Speaker 2 (27:31):
was a bit of a clickbait story that went around
the media, because the

Speaker 3 (27:35):
US Air

Speaker 2 (27:37):
Force colonel who was talking about it later said, well,
he actually misspoke.

Speaker 3 (27:41):
This was just a.

Speaker 2 (27:42):
hypothetical example he was talking about. There wasn't an actual
simulation where this happened. But I remember reading the
headline on social media first, it was like, oh, an
AI actually killed someone.

Speaker 3 (27:50):
Oh it was just a simulation. Oh it was just
a hypothetical.

Speaker 1 (27:52):
So, you know, you have to be really careful. I
was reading, I read the headline and I looked at
it and I had to read it a second time,
because killed human was in quotes, and I'm like, okay,
so it didn't really happen. So that's interesting.
But see, that's where I think the fear
comes in when people put these stories out there
and then immediately everybody's like, see, we told you. Hey,

(28:16):
it's coming for you. We're going to be owned by them.

Speaker 2 (28:19):
Yes, it's important not to buy into
the hysteria completely, especially as, I was saying earlier,
so many of the big companies actually want to fuel
the fear, because it means that only they will get
to use the technology.

Speaker 1 (28:31):
Wow, so interesting. Okay, so I love having you on.
I hope you'll come back. I love chatting about these things.
You're so knowledgeable. And for all of you out there listening,
go to Breitbart. They have great information on everything, but
definitely tech. So, Allum Bokhari, thank you for being on
the podcast today.

Speaker 3 (28:49):
Thank you, Tudor. I would love to be back.

Speaker 1 (28:51):
Yes, and thank you all for joining us on the
Tudor Dixon Podcast. As always, for this episode and others,
go to TudorDixonPodcast dot com. You can subscribe there,
or go to the iHeartRadio app, Apple Podcasts, or wherever
you get your podcasts, and join us next time on
the Tudor Dixon Podcast. Have an awesome day.
