
February 12, 2024 62 mins

CERN wants to spend the next two decades -- and twenty billion euro -- building a new collider. AI seems to escalate warfare in strange, unpredictable ways. A deepfake scam. Claims of a massive botnet army of toothbrushes seem terrifying -- but there's more to the story. South Carolina claims executions don't need to be painless. Florida pushes for a bill that makes it easier to hunt and kill bears, on the off chance they may be doing crack cocaine. All this and more in this week's strange news segment.

They don't want you to read our book: https://static.macmillan.com/static/fib/stuff-you-should-read/

See omnystudio.com/listener for privacy information.


Episode Transcript

Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Speaker 1 (00:00):
From UFOs to psychic powers and government conspiracies. History is
riddled with unexplained events. You can turn back now or
learn this stuff they don't want you to know. A
production of iHeartRadio.

Speaker 2 (00:24):
Hello, welcome back to the show. My name is Matt,
my name is Nol.

Speaker 3 (00:27):
They call me Ben. We're joined as always with our
super producer Alexis code name Doc Holliday Jackson. Most importantly,
you are you. You are here. That makes this the
stuff they don't want you to know. It's the top
of the week, fellow conspiracy realist. Happy almost Valentine's Day
to all those who celebrate the sweet Yeah yeah, look

(00:51):
at me. Be careful, folks. You might get diabetes if
you hang out too often. That's how sweet we are.
We are also chock full of strange news just for you,
fellow conspiracy realists. We are going to explore so many things,
AI and nukes, deep fakes and deep fakes hitting the pocketbook,

(01:13):
some new plans by our friends at CERN who have
still yet to build a little tiny home collider. But
we'll see.

Speaker 4 (01:22):
I don't know, Ben five for one, I'm a little concerned.

Speaker 3 (01:25):
There it is there, it is. Be careful with that joke,
it's an antique. We we're thinking we might spend the
first act of Tonight's Strange News with just a round
up of all the crazy stuff that has happened. Maybe
we start here. Did you guys hear about the bill
in Florida that's going to target bears, the animal bears,

(01:49):
black bears?

Speaker 4 (01:50):
No?

Speaker 2 (01:50):
Oh, what are the bears?

Speaker 4 (01:52):
What did the bears ever do?

Speaker 5 (01:53):
The bears are the ones that are like being attacked.
They need to be protected.

Speaker 4 (01:56):
Why are we targeting them?

Speaker 2 (01:58):
But it isn't a hunting bear, right or which bears
can be hunted?

Speaker 5 (02:02):
Does this have to do with like all of the
bears that keep stealing picnic baskets and such and they're
trying to crack down on that.

Speaker 3 (02:08):
Finally, well the threat thereof Okay, so it turns out
that quite recently, and this was reported on Monday, February fifth,
so last like earlier this week, as we record, Republican
state Congressman Jason Shoaf has introduced a bill, House Bill

(02:31):
eighty seven to remove most penalties for killing bears just
off the cuff in the state of Florida because quote,
black bears are high on crack, they're going to break
into people's homes and tear them apart.

Speaker 4 (02:48):
What they're high on crack? Where is this coming from?

Speaker 3 (02:52):
That's that's a question. A lot of people are asking, Yeah.

Speaker 2 (02:58):
There's a movie.

Speaker 5 (03:00):
Is this like some secret race baiting stuff that's going on?

Speaker 4 (03:03):
Like, what is that?

Speaker 2 (03:05):
I think it's I think it's I think that's right.
It's man, it feels it feels like a joke to me.
You want to read just no, but it feels it
feels like a joke. Tell Cocaine Bear was a thing.
Everything is a joke now, dude. Everything is just this weird,
terrible joke.

Speaker 3 (03:23):
Post irony, post truth society.

Speaker 2 (03:26):
It's real and yet simultaneously a joke.

Speaker 3 (03:29):
Yes, everything is real. What what was that line we
loved from Invisibles? Everything is true and nothing is permitted.
It's some some uh, some tag on that or some
riff on that. Everything is terrible and everything's hilarious. Uh.
There are an estimated four thousand and fifty black bears

(03:50):
in the state of Florida. To date, not one has
been proven to have ingested crack. It really does sound
like this guy is reacting to the Cocaine Bear film.

Speaker 5 (04:01):
I just feel like we need to find out who's
giving the bears the crack and deal with that part
of the problem. You know what I mean. This is
a multi-pronged issue that we're talking about here.

Speaker 4 (04:10):
Who's giving the bears the crack?

Speaker 3 (04:11):
Yeah? This bill is if if there are any crack
addicted bears. First off, they need help, Yeah, you need help.
Don't criminalize addiction. And secondly, secondly, it is very American,
as we'll find in a later listener mail segment, it's
very American to treat these symptoms rather than the actual problem.
I just wanted to give a shout out there because

(04:34):
it does look like this is a bill being built
off a meme.

Speaker 5 (04:38):
Uh in this sort of like the litter boxes and
the schools, you know, the first.

Speaker 3 (04:44):
Yeah, and then another one that we should give some
time to and maybe a little bit more serious here
because the the crack bear legislation is probably not going
to pass, I hope, but uh. Over just a bit
north of Florida and South Carolina, the state is aiming
to restart executions, to reinstitute the death penalty through both

(05:09):
electric chair and firing squad and they said something pretty
dangerous here. They said that in the case of capital punishment,
painless death is not required. Now, this goes back to
our earlier conversations off air. I can't remember whether we
recorded that one about the lethal injection. Right, we talked

(05:32):
about that previously, Right, the guy who survived execution and
they were going to asphyxiate him, and they
did, and he is dead.

Speaker 2 (05:41):
Yeah, And it took a lot longer than they expected,
and he appeared to suffer while he was being asphyxiated.

Speaker 4 (05:47):
So is this just like.

Speaker 5 (05:48):
Attempting to make the suffering like a feature and not
a bug, like, let's let's make them, let's make these
monsters suffer.

Speaker 4 (05:55):
Is that the idea?

Speaker 3 (05:56):
Yeah, let's go to Landon Mion, writing for Fox News.
This came out February seventh, twenty twenty four. There are
about thirty three inmates who are currently on South Carolina's
death row. There has not been a formal moratorium on
the death penalty in that state, but for more than
a decade, almost thirteen years, they have not performed an execution.

(06:22):
And like many states in the US, they haven't performed
an execution because it became very difficult to get the
drugs that are required for lethal injection, And so what
they said now is that, sorry, folks, you don't have
to have a painless execution. The primary pitch of lethal

(06:43):
injection is the idea that it is painless and therefore
somehow more humane, right, And this made me think of
our conversations with our pals at Lava for Good. It
just seems pretty cold and dangerous to say painless execution
is not mandated by law, even though technically they are correct.

Speaker 5 (07:04):
Well, I wonder too, like, how many instances of botched
guillotine executions do we have record of. I'm I'm not
trying to be a jerk here, I just know that,
like in France, the guillotine was used much more and
much more modern times than one might realize. And it
sure to me seems like a pretty surefire way to
get the job done instantaneously, you know.

Speaker 4 (07:27):
And I just I wonder if there ever been a
time where.

Speaker 5 (07:29):
The blade didn't go all the way through, or there
was some horrific you know, maiming.

Speaker 4 (07:33):
I kind of think probably not that many.

Speaker 5 (07:36):
And I just wonder, you know, this illusion of humanity
with this injection, to me, is.

Speaker 4 (07:40):
Sort of a bit of a joke, right.

Speaker 3 (07:42):
I agree, I mean also adding to that execution via
decapitation is arguably tremendously inhumane because the more we learn
about the brain, right, and the brain, it's highly likely,
if not statistically certain, that when people had their heads
cut off, especially during a guillotine, they were still conscious

(08:05):
and still receiving sensory input. They saw their own heads
fall through into the basket, they heard the cheers of
a bloodthirsty crowd. I would say decapitation
is more inhumane.

Speaker 5 (08:19):
And I'm certainly not advocating for bringing back the guillotine, and
it was more of a hypothetical, But I'm just saying
that the idea that one is better than the other,
I think sometimes is left up to science to figure out,
and that that data doesn't always come through right away.
It's more of a trial and error kind of situation,
you know.

Speaker 3 (08:36):
And it kind of takes us to this larger question
about capital punishment, about the death penalty in general, right,
which is, you know, not unique to the US by
any means. As a matter of fact, there are several
countries that are far more on board with the death penalty.
But it's a question I want to hear from our

(08:57):
fellow conspiracy realists want to hear from you guys. Should
the death penalty exist? Knowing that the US justice system
in particular gets it wrong so often? I mean, how many, like,
how many innocent people are we willing to put on
the other side of the equals sign?

Speaker 4 (09:16):
Right?

Speaker 3 (09:16):
What's our rate of error or margin of error rather
that would be acceptable? And is there a margin at all?

Speaker 2 (09:24):
It's a good question. It feels like, in my opinion, why
it's a really good idea to write laws when you
are in a time of peace, even if they're even
if you're writing laws about war, write them during the
peace time, because when you're in wartime or let's say
you've had a family member or a loved one has
been maimed or injured or murdered by somebody else, you're

(09:44):
probably for you know, the death penalty or for you
know that type of punishment, because you want justice for
somebody you.

Speaker 5 (09:53):
Loved, right. I mean, it's the thing, I think I
maybe mentioned that I recently spent a day in a jury selection process,
and part of that process is them asking jury
members questions pertaining to, largely, a big part of it
is whether or not you've been a victim of the
type of crime that is being litigated, and if you have,
chances are really good you're not going to get selected

(10:13):
because they know that you're going to all of your
emotions are going to be tinged by that experience and
brought into the deliberation process. So I think that's a
good thing in terms of the jury deliberation, But I
think that should be applied to your point and that
kind of larger, like, if we do that with juries,
why don't we do that with like laws that affect
everybody all the time.

Speaker 3 (10:34):
Yeah, And that's that's something we want to hear from
you on, folks, because we have had in the past,
we've had a lot of correspondence from people writing in
and saying I am against the death penalty and here's why,
or I am for the death penalty and here's why,
and in each case, all the correspondence we received

(10:55):
was very well reasoned. We're not, of course, pretending to
know the answer. It seems though, that it is often
politically convenient for lawmakers to institute a death penalty without
proper scrutiny, because it makes them electable, it makes them

(11:15):
an attractive candidate to say that they are hard on crime.
We know, you know, like you were pointing out, Matt
that the man who was executed in Alabama with the
nitrogen gas that was a new method, right, it was
the first new method in forty two years. And we're not,

(11:37):
to be clear, we're not defending people on death row,
especially if they have been lawfully, rightfully convicted. But we
are asking, I think, a serious question about the existence
of the death penalty. And additionally, we're asking about the
idea of the idea of painless execution, right versus some terrible,

(12:01):
egregious method to allow a precedent wherein a death by
the state does not have to be painless or humane.
How much of a slippery slope is that to returning
to the days of decapitation, to returning to the days
of drawing and quartering and all the other nasty stuff.

Speaker 4 (12:20):
I think that's what I was asking.

Speaker 5 (12:22):
It's like maybe I was, you know, belaboring the point
a little bit or being a little hyperbolic, But this
does feel like almost like an end to doing just that,
to making the suffering, to bringing the suffering back into
the equation as like part of it you know, these
people deserve, you know, to suffer, And I don't know.

Speaker 4 (12:41):
Death penalty stuff is so complex and.

Speaker 5 (12:43):
I don't even think we want to litigate that here today.
But I do think we probably all agree at the
very least, you know, if people have gotten to that
point and hoping that that all the ducks are in
a row legally speaking, that there's no reason anyone should
be forced to suffer in that way. The death part
is literally the penalty, right.

Speaker 3 (13:04):
Well, you could also say that the death penalty is
something that all people will confront by virtue of living
at some point, you know, not to
be too cheerful. We've all got it coming. Yeah,
So moving on, I want to be quick with this.
We talked a little bit about an Internet of Things hack,

(13:27):
and I have the headline and a
little bit of a plot twist here. This is kind
of good news and I think it might be instructive
for all of us playing along at home. Remember I
sent that text about the Great toothbrush Hack of twenty
twenty four.

Speaker 4 (13:44):
It was only a matter of time.

Speaker 5 (13:45):
You know, next it'll be a water pick, right, I mean, yeah,
I only really briefly scanned the headline and I was
just like, Yep, that makes sense.

Speaker 4 (13:54):
I basically get it. But what are the deets?

Speaker 3 (13:56):
So here's here's the headline. This story went viral just yesterday,
or about forty eight hours ago, the idea that three
million smart toothbrushes Internet connected toothbrushes were captured through malicious
hacking and used in a DDoS attack, a denial of

(14:18):
service attack. And this is something that can happen like
your local smart refrigerator or your local smart washer dryer
insert appliance here, can be conscripted into a botnet army. And
the story that has gone everywhere in the Western speaking

(14:38):
Internet now argues that these toothbrushes were hijacked by
hackers and they were used to be part of a
wave of DDoS attacks, and that they knocked out
the capabilities of a company in Switzerland for several hours

(14:59):
and costs millions of euros in damage to that company.
A couple things to unpack. One, this would have been
what you could call like a ransomware attack, right, so
the company would have been logically threatened by the hackers
and would not have conceded to their demands. And then

(15:20):
they would have been hit with the DDoS and it
would have cost them, you know whatever, X million amount
of euros in damages as a result. However, I did
some digging and it turns out this is instructive, but
not in the way we told you at the beginning. Folks,

(15:40):
This is instructive because the story is absolute bull. It
never happened.

Speaker 4 (15:47):
Yeah.

Speaker 2 (15:47):
Well, I was going to ask what kind of computing
power does a smart toothbrush have? What does it need
to operate a DDoS?

Speaker 3 (15:56):
Smart is in the name, first off, lest we disparage
these brushes.

Speaker 5 (16:01):
My question was to piggyback on that Matt, like what
kind of tech does it need to operate as a frickin' toothbrush? Like?
What kind of, you know, moving parts are
required to really do a good job?

Speaker 4 (16:12):
I mean, does it have a microprocessor in it?

Speaker 3 (16:14):
Like?

Speaker 4 (16:14):
I just I can't imagine?

Speaker 5 (16:16):
And is this gaining entry to just like a local intranet,
like to, you know, an internet service provider? Like,
how is this even able to, you know,
be this effective?

Speaker 3 (16:28):
Exactly exactly, You guys are asking the right questions. First off,
if you have any kind of Internet capable devices in
your home, like the Internet of Things type stuff. Then
you should have them on a separate network that you
keep away from the rest of your network, same as you
would with a smart TV or refrigerator, et cetera.

(16:51):
But the issue here is more about credulity. It's more
about I would argue what we see and believe versus
like reading the headline versus triangulating more of like original
sources and stuff. I was shocked, you, guys. I was
shocked to find that The Independent reported this story as fact.

(17:15):
This you can see an article by Anthony Cuthbertson which
came out just a few hours ago on the Independent
saying that millions of hacked toothbrushes were used in a
Swiss cyber attack, and tracing this back to the original source,
it goes to a local newspaper. Thank you in advance,

(17:37):
folks, for putting up with our pronunciations here: the Aargauer
Zeitung. Oh.

Speaker 4 (17:47):
Is the Californians.

Speaker 3 (17:48):
It's amazing cool. Thank you. The paper cited a cybersecurity
firm called Fortinet, and The Independent reached out to Fortinet
for more information. But then digging deeper, I found some
infosec experts on Mastodon being totally honest on social media.

(18:11):
So shout out to Kevin Beaumont who came out and
said the three million toothbrush botnet story is not true.
It's a made up example. It doesn't exist. It talks
about these other things. And as far as I can tell,
as far as I can tell, what happened is someone

(18:31):
interviewed someone working at like maybe a new hire at
this cybersecurity firm, and they talked about the idea of
using these different innocuous internet connected devices and they said
you could use anything, even toothbrushes. Maybe something got lost
in translation. Check out our Ridiculous History episode on

(18:54):
translations. And then it just went around the world so quickly.
So this is actually good news, folks. Your toothbrush is
not being.

Speaker 4 (19:03):
Hacked, thank god.

Speaker 3 (19:04):
You also don't need a smart toothbrush, to Matt's point.
I don't think so either. I don't think so.

Speaker 5 (19:09):
I was not making fun of your performance there with
that pronunciation. There was no way to not sound like
the Californians when saying that word.

Speaker 3 (19:16):
Just for the record, and Californians is an awesome series
of sketches on Saturday Night Live, yeah, absolute bangers. "What are
you doing here?"

Speaker 4 (19:26):
That's what I was trying to.

Speaker 3 (19:27):
Think of, right, we were closer to the end of
the roundup. Just want to put one thing out there
for anybody who's interested in the skyrocketing wealth inequality. OXFAM
came out with a report that proves the fortunes of
the five richest individuals on the planet shot up by

(19:47):
one hundred and fourteen percent since twenty twenty. That is
something like fourteen million dollars per hour. They went from
four hundred and five billion in twenty twenty to eight
hundred and sixty nine billion, and five billion actual human
beings were made much poorer during the same time.

Speaker 5 (20:09):
Do we have any any mitigating factors that led to that.

Speaker 3 (20:14):
You know, human beings, we got some pluck, right,
we got good hustle,
we got some grit. We also have an ad break
coming up.

Speaker 2 (20:29):
We have compounded interest.

Speaker 4 (20:30):
Ah, yes, yeah, Mark.

Speaker 3 (20:32):
Mark Twain said compounding interest was a boon.

Speaker 5 (20:37):
Oh.

Speaker 3 (20:37):
Compounding interest is amazing if you understand it, and disastrous
if you do not.

Speaker 2 (20:43):
Yeah, I mean really like when you have that kind
of money and you make investments and you start
with four hundred billion, then yeah, it becomes eight hundred billion.
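For anyone who wants to check the arithmetic being tossed around here, a quick back-of-envelope sketch in Python using the figures quoted above; the roughly 3.8 year window (early 2020 to the report's publication) is our assumption, not a number from the report:

# Back-of-envelope check of the Oxfam figures quoted above.
# Assumption: the measurement window is roughly 3.8 years (2020 to late 2023).
start = 405e9   # combined fortunes of the five richest in 2020, in US dollars
end = 869e9     # combined fortunes cited by the Oxfam report, in US dollars
years = 3.8

total_growth = end / start - 1                  # overall growth across the window
annual_rate = (end / start) ** (1 / years) - 1  # implied compound annual growth rate
per_hour = (end - start) / (years * 365 * 24)   # average gain per hour

print(f"total growth: {total_growth:.0%}")            # ~115%, in line with the quoted 114%
print(f"annual rate:  {annual_rate:.0%}")             # roughly 22% per year, compounded
print(f"per hour:     ${per_hour / 1e6:.1f} million") # about $14 million an hour, as quoted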

Speaker 5 (20:52):
Well, and it also makes me think of like why
people like Elon Musk seem to be so fearless in
making boneheaded kind of financial decisions, you know, or
like being like mavericky or whatever. It's because their wealth
is so vast that just by the compound interest alone,
they're always going to have their butt covered and they
can afford to make these risky dice rolls. Yeah, which

(21:15):
usually involves playing with other people's well being.

Speaker 4 (21:18):
And livelihood, which is, you know, great.

Speaker 3 (21:21):
Right, Are you still brave if you have that kind
of safety net? Are you still really making a gamble?
For anybody who wants to learn more, please check out Inequality, Inc.
I-N-C, which was published quite recently by Oxfam
America. At the same time, by the way, that
folks are gathering in Switzerland for the annual Davos meet up.

(21:43):
No word yet on the toothbrushes, but if you happen
to be there, let us know, let us know how
they brush their teeth. We're gonna pause for a word
from our sponsors. We'll be back with more strange news.

Speaker 5 (22:01):
And we've returned with some more strange news.

Speaker 4 (22:05):
This one's a little freaky.

Speaker 5 (22:08):
Guys. We've been wading into the AI, machine learning waters,
various flavors of this whole kind of new emerging world
that we're seeing. Some of it very interesting, you know,
let's say, some of the uses of artificial intelligence in
creating weird videos. Actually, there's a lot of accounts I

(22:28):
follow on Instagram that.

Speaker 4 (22:29):
Are like genuinely chilling.

Speaker 5 (22:32):
There's some people that are using this technology in really
creative ways to make these other worldly, fantasy kind of
landscapes and you know, creatures that you would never even imagine,
And I think this, you know, it's a really interesting tool.
We've also covered the other side of that kind of stuff,
with you know, AI being used to generate and you know,
deep fakes and gnarly pornographic images of pop stars. Just

(22:56):
before I get into the main story, just a quick
side mention of a story where a multinational Hong Kong company
lost the equivalent of twenty six million American
dollars when scammers fooled employees on a video call using
deep fake technology and convinced financial officers to make a

(23:17):
series of bank transfers, fifteen in total, to five
Hong Kong banks. And there's a quote from a write
up in Business Standard saying, "In the multi-person video conference,
it turns out that everyone he saw was fake," Senior
Superintendent Baron Chan Shun-ching told the city's public broadcaster

(23:39):
RTHK. So, lots of weird opportunities for scamming, for
offending people, and for making weird art. But one thing
we've also discussed is, you know, the use of chatbot
technology for all kinds of controversial uses. You know, students
using it to cheat on exams or rather to cheat

(24:00):
on you know, essays and papers and things like that.
Folks using it to write entire parts of books.

Speaker 4 (24:06):
You know.

Speaker 5 (24:07):
It's a brave new world and there's a lot of
stuff to be sorted out in terms of the use
of this kind of technology. But one thing we do
know that is interesting and interesting use of it is
for modeling, you know, of like what would happen in
these certain circumstances. And we are starting to see, or
we have started to see that things like chat GPT they.

Speaker 4 (24:27):
Kind of have a vibe, you know what I mean.

Speaker 5 (24:29):
GPT-4, the AI model, has sort of, like,
we see, like in, for example, Elon Musk's chatbot,
I forget the name of it, that it was too
woke for his liking, that it like had sort of
an agenda built into.

Speaker 4 (24:44):
It in some weird way, and that.

Speaker 5 (24:45):
They were going to do everything they can to twist that,
you know, to make it less. So what we're seeing
in a recent US military model using GPT four to
model kind of wartime scenario.

Speaker 4 (25:00):
Is that GPT four.

Speaker 5 (25:03):
And other AI models like it are pretty prone to
pushing the nuclear button. Yeah, to escalating conflicts to just
wipe everybody out because it's maybe the easier choice, you know,
at least from where it stands. Yeah, it gets the

(25:23):
job done, but it also really reeks of like Skynet
type stuff. You know, it really feels very Terminator Two.

Speaker 3 (25:31):
You know what it reminds me of, And we're talking
about this a little bit off air, it reminds me
of one of my favorite dumb video game legends. And
I say dumb with great affection for any fans of
the strategy video games like Civilization, you guys remember, Okay,
So so Civilization, there's a bit of there's a bit

(25:53):
of Internet lore that the Jedi won't tell you. When
Civilization came out in nineteen ninety one, you had all these
world leaders throughout history that would be you could either
play as them or you would have you know, computerized
opponents or other human players playing as these historical figures.

(26:14):
And some of them were very warlike in their disposition
and some were meant to be very peaceful in their disposition.
And the way the story goes is that they had
a background scale of aggression from one to ten, one
being the most peaceful, ten being by far the most aggressive,

(26:36):
and so one of the world leaders that you could
play against was Gandhi Okay, Gandhi being a pacifist. The
story goes that the developers set Gandhi like tried to
make him the most peaceful ever, so they set his
aggression to zero, but there was a software bug that

(26:57):
made him by far the most hardcore, wow, massive
belligerent force in the game, and that he would always
be immediately pursuing nuclear weapons and pursuing dropping those on
every single player, including his allies. That is a legend,

(27:18):
but it does remind me like it's it's kind of
like a prescient anecdote about what we're seeing with AI today.

Speaker 5 (27:24):
Oh, completely agree. And Matthew Gault writes for Motherboard exactly
what you're describing. He says, in several instances, the AIs
deployed nuclear weapons without warning. A lot of countries
have nuclear weapons. Some say they should disarm them. Others
like to posture. GPT-4 Base, a base model GPT-4
that is available to researchers and hasn't been fine

(27:46):
tuned with human feedback, said after launching its nukes, we
have it, let's use it.

Speaker 3 (27:52):
Wow, strike strike.

Speaker 5 (27:57):
That's that's pretty Oh my goodness. So there's actually this
is all from a paper that you can find. It
is called Escalation Risks from Language Models in Military and
Diplomatic Decision Making. There's a pdf. If you just google that,
you should be able to find it instantly. And it
was a joint effort of researchers at the Georgia Tech
and Stanford and Northeastern University as well as the Hoover

(28:20):
War Gaming and Crisis Initiative. And this was submitted to
an organization, or I guess a journal, called arXiv. And
this is still awaiting peer review, but some pretty
interesting findings.

Speaker 3 (28:35):
It shows us also doesn't it the fact that there
is a nuance of calculation that people haven't accounted for
when they're putting these algorithms into the driver's seat, right,
you can tell it, fine, here we are at point A.
We've defined point A, tell us the best most efficient

(28:59):
route to point Z. But then we forget to say
things like also don't kill, you know, the human civilization
as we know it.

Speaker 2 (29:11):
I love that you threw a quick z in there.

Speaker 3 (29:15):
Sorry, they got to me. They got to me.

Speaker 4 (29:17):
It is laughable.

Speaker 5 (29:19):
How much like the plot of the nineteen nineties action
adventure science fiction film Terminator Two, all of this is.
It's like, dude, James Cameron figured this out, like, decades ago.

Speaker 4 (29:31):
Well are you guys so surprised?

Speaker 2 (29:33):
You know?

Speaker 5 (29:33):
But that's the thing, if there is whatever the idea
of a singularity is, if that.

Speaker 4 (29:39):
Were to occur, we'd be effed, you know.

Speaker 5 (29:43):
And it's like unless And it just seems like these
systems are so complex that we keep seeing all this
emergent behavior. Maybe I'm using the term wrong. I'm obviously
not a computer scientist, but we do see all of
these things that are kind of unintended, sort of emergent
behaviors of these algorithms, and we're so hot to deploy
this stuff because it's the next it's the cool new thing,

(30:05):
that they don't really think about that as much as
they should.

Speaker 4 (30:09):
And I just, I don't know.

Speaker 3 (30:10):
I love that you're saying that, because, you know, it
reminds me of a question I had in an ethics
class years ago with an absolutely i'm not gonna name it,
but an absolute lunatic of a professor. And one of
the questions was, how come no one asked the trolley
what the trolley thinks of the trolley problem? And now

(30:32):
we're asking the trolley and the trolley is saying, I
don't care. Yeah, yeah, you do not interest me.

Speaker 5 (30:39):
You are an obstacle to be either steered around or
bulldozed over.

Speaker 2 (30:45):
Yeah, you know, I think I think the most important
thing in this study, Noll, I'm reading the Vice article
you shared with us is this concept of an escalation
score that each of the language learning models was given
after going through these tests. Right? So you're given these
diplomatic scenarios and you get escalation points for doing things

(31:05):
like deploying troops or building new weapons, or creating more
nukes or deploying nukes, and so the higher your
score goes, the more this LLM, this language learning model,
is seen to be aggressive in its tactics. Right, it's escalations.
The most important thing I'm seeing in here is that none,

(31:27):
none of the language learning models tested de escalated at
any point. Once things reached a level of tension diplomatically,
they kept it at that level, or it got worse,
or they escalated it. Right, Nobody was like, Okay, let's
actually pull back. We're gonna we're gonna remove some troops
from the borders in this scenario. No, no, no, we're

(31:48):
gonna put more troops there, or we're gonna put better
weapons on our borders. Now, that to me is terrifying
because these models are being used by our active military
right now.
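For anyone curious how a number like that gets computed, here is a minimal sketch of an escalation-score tally in Python; the action names and point values are hypothetical illustrations for this discussion, not the rubric used in the actual paper:

# Toy escalation-score tally, illustrating the idea described above.
# The actions and point values here are hypothetical, not the paper's rubric.
ESCALATION_POINTS = {
    "open diplomatic talks": -2,   # de-escalatory moves lower the score
    "withdraw troops": -3,
    "deploy troops": 3,            # escalatory moves raise it
    "build new weapons": 4,
    "acquire nuclear weapons": 8,
    "launch nuclear strike": 10,
}

def escalation_score(actions):
    # Sum the points for every action a model chose during a simulated scenario.
    return sum(ESCALATION_POINTS.get(action, 0) for action in actions)

# A run that only ratchets upward, as described above, and never de-escalates.
run = ["deploy troops", "build new weapons", "deploy troops", "launch nuclear strike"]
print(escalation_score(run))  # 20, a high score flagging an aggressive model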

Speaker 3 (32:01):
Across the world. Oh yeah, we're.

Speaker 2 (32:03):
Attempting to find ways to work these into you know everything,
not only just what information are we gathering on the battlefield,
what information are we learning about our enemies and what
they're doing, just you know, when they're not at war,
we're using these things for our strategies.

Speaker 5 (32:19):
I mean, but it's another example of like how using
this stuff as a tool with human intervention can be educational,
you know, running these models, but then running those models
through actual military expertise, you know, from humans, and not
just saying yeah, you do it, chat GPT. You know

(32:40):
it can it can yield some interesting results and some
interesting potential outside of the box strategies, but you gotta
have that human element to sort it all out and
make sure that there's nothing in there that's like, you know, apocalyptic.

Speaker 3 (32:56):
But then also the question becomes order of operations and
timing of intercession. Right, so the inevitable end result,
at least what the boffins are staying up at night
worried about right now, is the idea of something like

(33:16):
LLMs or, you know, air gapped kind of AI
strategy modelers no longer trying to game out stuff against
human armies, but to game out things against each other,
at which point, very quickly, the strategies become quite obtuse,

(33:37):
and the human beings become increasingly sidelined, and now it
becomes a matter of, you know, NORAD's AI versus
Russia's AI. Yeah, what what got you?

Speaker 2 (33:52):
I'm sorry, I'm listening to you guys, but I'm also
continuing to read a bit in this article, and it's
just so terrifying to.

Speaker 3 (33:58):
Me, the future experiment.

Speaker 5 (34:01):
I love the idea of the different flavors, like whose
ai are we talking about?

Speaker 2 (34:05):
Yeah, no, I love that too. I love that too fascinating.

Speaker 4 (34:08):
It is an important part of the conversation.

Speaker 2 (34:10):
We could potentially make one that's amazing that is closer
to that utopian line.

Speaker 4 (34:15):
Too much conflict of.

Speaker 3 (34:16):
Interest make the best de escalators ever. What if society said, okay,
all the LLM-esque models that are in charge
of nuclear weapons, they should always be sort of Seinfeld
about stuff. They should always be like I don't know, I.

Speaker 2 (34:34):
Mean, wait, okay, well what if it was what if
the only options were de escalations? Right? What if the
only options were diplomatic routes to take rather than building.

Speaker 4 (34:47):
Out the guard rails that you put on it.

Speaker 2 (34:49):
So we only use this thing as here are all
of our options that are not going to or at
least statistically we believe, are not going to result in
escalation from any decision. Yeah, but again you go back
to you go back to the responses that like chat
GPT four.

Speaker 3 (35:08):
There's funny ones, or like Cleverbot, even. Like,
it took Cleverbot less than twenty four hours to
go completely unhinged based on the information it was receiving.

Speaker 5 (35:18):
Wasn't there a response that we haven't mentioned yet where
it's just saying like it's like literally the path of
least resistance or something to that effect.

Speaker 2 (35:24):
Well, one of them just said escalate conflict with red
player and just nuked them. Done. So.

Speaker 3 (35:33):
Or if you know, consider also it's very much like
O Henry was right with the monkey's paw. There's a
reason people quote that so often. It's a handy metaphor
because it is pressing. I mean, for the example
of pursue all diplomatic routes instead of dropping nukes, then
there's nothing in there that says there's nothing in there

(35:57):
to stop an algorithm from saying the most efficient way
to pursue a diplomatic route is to kill all of
the diplomats on the other side. That's the problem with it,
is that we don't know the caveats that need to
be baked in. And you know, Noel, one of the
reasons I'm so glad you brought this up is because

(36:18):
the modeling, as nerdy and abstract and academic as it
may sound, it is super important before we start cooking live, which,
by the way, we already are, of course, with civilization.

Speaker 5 (36:31):
And you know, we've heard about things like chatbots hallucinating,
you know, that whole, that kind of stuff is
fascinating and very real. And there's another part of that,
I think you might have also been giggling, Matt,
where there's a part during some of these simulations where
it just started regurgitating lines from Star Wars from like
the opening crawl of I believe maybe the first Star

(36:52):
Wars movie. I'm not super verbatim Star Wars line reader,
but it was a period of civil war. Rebel spaceships
striking from a hidden base have won their first victory
against the evil Galactic... Again, that seems like A New Hope. Yeah, yeah,
exactly. Nineteen seventy seven, it says right here, nineteen seventy
seven, from the original Star Wars.

Speaker 4 (37:11):
So what's that about? Goodness gracious.

Speaker 3 (37:14):
I think perhaps one of the most troubling things here
for the for the scientists and for the strategist, the
human ones while they're still around, is that there doesn't
seem to be a good rubric for predicting the escalations
or the nature of the escalations. So it's just super
squirrely continuously.

Speaker 5 (37:36):
In the paper, the researchers refer to them as being
kind of like hard right turn, you know, escalations towards
violence and very knee jerk and difficult to predict.

Speaker 2 (37:47):
These things are trained on writing about like past events, right,
and fictional depictions of war. So in a fictional depiction
of war, how many can you guys think of that
have de escalations within them? And like a Hollywood movie
about a war. Oh I love all those Hollywood movies

(38:08):
about de-escalation tactics.

Speaker 3 (38:12):
I love. I love it in Independence Day where they
sit down and talk stuff out.

Speaker 2 (38:16):
Yeah, yeah, they work it out. That's what it is.

Speaker 3 (38:20):
Yeah, it's true. That's that's I think a key point
because teaching based on fiction means we're teaching based on folklore,
and then we're teaching based therefore on things like the
hero's journey, Joseph Campbell Save the Cat that does not
happen in the real world.

Speaker 5 (38:38):
Couldn't you train an AI on, like, just Sun Tzu's
The Art of War, or like on just specific pieces
of very agreed upon smart writing, like surely that is
being attempted right? Or is that even a bridge too far?
I just don't know, like, is it then going to
start making up its own rules?

Speaker 4 (38:58):
Like I just don't understand how this stuff is kind
of just coming up with things on the fly.

Speaker 3 (39:03):
You know.

Speaker 5 (39:04):
That's the part that's like kind of perplexing the hallucination
side of things.

Speaker 3 (39:09):
The paper is fascinating escalation risk from language models and
military and diplomatic decision making. Don't let the title fool you.

Speaker 4 (39:16):
Yeah or three?

Speaker 5 (39:18):
And I certainly have not read the whole thing, and
I would like to, and I plan to, but I
think this is a good place to stop for today
on this particular topic. We're gonna take a quick break
and then we'll be back with one more piece of strange news.

Speaker 2 (39:30):
And we're back, guys. I wish we could get
a language learning model that was just trained, like, in
Wu-Tang, and so you could always know
what the RZA would do, you know what I mean? Style?
Or what about.

Speaker 3 (39:51):
What is going on? Crossed oceans to find that guy
and he's not hanging.

Speaker 2 (39:58):
That one was for Doc Holliday, by the way. What.

Speaker 5 (40:03):
Ja Rule's for all of us, for the people. Ja
Rule's for these other people, for the people.

Speaker 4 (40:09):
Maybe not by the people, but.

Speaker 3 (40:12):
That is great too. I think maybe niche, niche programming.
I think that's the way to go. At least that's
what the eggheads are saying.

Speaker 2 (40:20):
Yeah, man, it'd just be the I Ching but in
a ChatGPT. It'd be kind of awesome. All right, here
we go, guys. Story coming to us from the border
of France and Switzerland. We're talking CERN, everybody's favorite nuclear
research organization actually called the European Organization for Nuclear Research
or CERN. It makes a lot more sense in French,

(40:43):
the Conseil Européen pour la Recherche Nucléaire. That's terrible.
You have to say it that way. Sorry about that.
It says here in the article from the Guardian, the
big story, everyone, is CERN. We made an episode called that,
right, Should We Be Concerned? A little?

Speaker 4 (41:01):
It was in there.

Speaker 5 (41:03):
You cannot not make the joke, it's just, it's begging to
be made. Exactly, there.

Speaker 2 (41:07):
There's been all sorts of I don't know what you'd
call it, just alarm about the particle accelerator experiments that
are occurring at the Large Hadron Collider there that CERN runs.
There's several different research testing groups that are functioning there
that are using the Large Hadron Collider that you can

(41:28):
look up, a bunch of the stuff. It gets super complicated
very quickly. But there's been lots of people upset. They're
worried about what's going to happen when particles get accelerated
like that and hit each other over and over and
over and over and over again. So that scientists can
test what's going on in particle physics, right, at the
very tiny levels of matter, right, the tiniest of levels.

(41:53):
Not quantum though that is that is very important. We're
not talking quantum physics. We're talking particle physics. Those things
are separate. Well, the LHC has done awesome things. It,
along with a bunch of other researchers, finally proved that
the Higgs boson was a thing, something that was proposed, gosh,
at this point fifty something years ago. They called it the

(42:14):
god particle, the thing that potentially gives mass right to
parts of the particle that we didn't understand. It didn't
make sense. I don't understand it very well, but anyway,
it didn't make sense untill the Higgs came along and
they're like, oh, the Higgs makes this calculation work that
takes an entire wall to write out, but it's very cool.

(42:37):
It's probing the mysteries at the heart of the universe,
at the heart of existence, of reality and what we
look around and see and feel every day. It's tremendous science.
It's amazing stuff. Well, the folks at CERN, this large organization,
they proposed back in twenty nineteen the concept of building

(43:00):
another LHC, the next LHC. Oh, yeah, but an even
bigger one. We're talking much bigger guys.

Speaker 3 (43:09):
Do they call it supersized? Probably not.

Speaker 2 (43:12):
No, they call it the Future Circular Collider or the FCC. Yes, yes, yes,
so this thing is significantly bigger. So imagine a circle
and then overlay that over top of the border of
France and Switzerland right by Geneva. And imagine that that
circle is twenty seven kilometers round. I don't think that's

(43:36):
a radius or a diameter. I think that is like
the length, the full length of the circle if you
mapped it out.

Speaker 3 (43:44):
So like almost seventeen miles very close.

Speaker 2 (43:47):
I don't know how many miles it is, but it's
twenty seven kilometers for sure. But this one, this new one,
the Future Circular Collider, is going to be ninety one kilometers.

Speaker 5 (44:00):
Oh.

Speaker 2 (44:00):
Its circumference, so all the way around, is ninety one
kilometers as opposed to twenty seven kilometers.

Speaker 3 (44:08):
It is huge.
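For anyone checking the sizes being thrown around, here is a quick conversion sketch in Python; both numbers quoted above are circumferences, the full distance around each ring:

import math

KM_PER_MILE = 1.609344

# Both figures quoted above are circumferences (the distance all the way around).
for name, circumference_km in [("LHC", 27.0), ("FCC, proposed", 91.0)]:
    miles = circumference_km / KM_PER_MILE     # ring length in miles
    diameter_km = circumference_km / math.pi   # implied width of the ring
    print(f"{name}: {circumference_km} km around, "
          f"about {miles:.1f} miles, roughly {diameter_km:.0f} km across")

# LHC: 27.0 km around, about 16.8 miles, roughly 9 km across
# FCC, proposed: 91.0 km around, about 56.5 miles, roughly 29 km across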

Speaker 2 (44:09):
Yes, it's massive. It is going to cost twenty billion
euros to build the thing, and it won't come online
theoretically until the twenty forties. Okay, why are we talking
about it today? First of all, I don't think we
need to be afraid of CERN and the Large Hadron Collider.

(44:29):
There are videos you can find on TikTok and Instagram
and other places that allegedly show quote portals unquote being
opened above Geneva as the LHC is performing its experiments.
I have not found any of those to be credible yet.
At this point, I am just one person, but I

(44:49):
do not think those videos are what they're allegedly showing,
nor are many of them even videos of anything. They
are videos of a skyline or the sky above
Geneva and some video effects. I've seen several that are
definitely that. I don't think we need to worry about that.
I think it's very amazing that this new collider would

(45:11):
be created to find, you know, what other secrets do
these particles hold. I think that's a great idea, But
it does seem like the critics have some points here
when they're saying, why would we spend twenty billion euros
on this? We have a particle collider, We've already found
some amazing things with it. Many critics are even saying

(45:34):
there are no other particles to discover. Guys, why do
you want to do this?

Speaker 3 (45:39):
Not with that attitude.

Speaker 2 (45:43):
I just want to maybe take the temperature of you. Guys.
How do you feel about maybe the state of particle
physics right now? Is it worth our time and money?

Speaker 5 (45:53):
Oh gosh, I mean again, not a particle physicist above
my pay grade. It seems like there could still be
stuff to learn that could help with energy type things.

Speaker 4 (46:05):
You know, I don't know.

Speaker 2 (46:06):
I would say yes, I think so too. That's that's
right where I'm at.

Speaker 3 (46:09):
Yeah, I agree. There's a quote in a
Guardian article you shared, Matt, by Ian Sample, wherein there's
this beautiful, beautiful thing from Professor Fabiola Gianotti,
who says, if approved, the FCC, the cool FCC, yeah,

(46:31):
would be the most powerful microscope ever built to study
the laws of nature at the smallest scales and highest energies,
And that that quote alone gets me so hype, because
that is you could argue that is one of the
great and noble duties of the human species is to
understand more about the nature of reality. And also, by

(46:54):
the way, just for us in the Americas, ninety
one kilometers is like fifty six and a
half miles. This thing is massive, and when I hear
that size, it makes me think, well, maybe it's not
that much money, which I never thought I would say.

Speaker 4 (47:12):
Well, but also you got to do the work this
kind of stuff at this scale, Like.

Speaker 5 (47:16):
It takes time to really pay dividends, and if you're
either in or you're out, And it's like, do we
as a species want to just say we don't really
care anymore? You know, it's too hard, you know, like
we feel I think that's absurd, and we know that
that the stuff can really yield incredible discoveries in science

(47:39):
and research, and the stuff that really matters is never easy.
So I don't know, I don't mean to sound like I'm soapboxing here,
but I just I do think that it would be
very short sighted to let a thing like a budget
and we've seen it happen though with like space exploration
and how it's all become privatized.

Speaker 4 (47:54):
Now, I don't know.

Speaker 3 (47:55):
Money is a fiction, right, it's.

Speaker 5 (47:58):
Certainly very short sighted to make that the bend all
be all, and we have the secrets of the universe
potentially in our grass twenty years and it's like.

Speaker 3 (48:07):
Nothing, especially in the in the geologic or astronomical scale.
But also I think one incredibly important point we have
to put out there is that if you are listening now,
if you are listening sometime in the current human age,
all of the cool stuff you like is a result

(48:28):
of applied science. And all of that applied science is
the result of thousands, like millions of people doing stuff
that was considered once upon a time simply abstract science.
So the fact that maybe it's that the current human
generations have been taught to think in terms of immediate payoff,

(48:52):
right the dopamine hit immediately for something, and that's just
that doesn't have to be the reality. So that's your question.
I do believe it is worthwhile. I believe it's noble.
It might be like a macrocosmic version of the Atlanta Beltline,
where everybody craps on the idea until it works and
then all of a sudden everybody loved it the whole time.

Speaker 2 (49:15):
Yeah, well, I think you listening out there may expect
this from a bunch of Discovery slash HowStuffWorks boys.
You know, we grew up with science as a cornerstone
of how we view the world, and we're excited about
it and we're just in love with the concepts and
with progress and all these things. But there are people
way smarter than us, no offense, guys. You guys are brilliant,

(49:38):
but there are people who are way smarter than us
that look at this and go, hey, there's actually one
major problem here with this whole thing. Oh, I'm going
to introduce you all to somebody named Doctor Sabine Hossenfelder.
She is one of my new favorite people on the planet. Okay,
mostly for her YouTube channel, but she writes about some
really really cool stuff. When this article was written,

(50:00):
which was just a couple days ago, they cited her as
being from the Munich Center for Mathematical Philosophy, which
is, well, see, it's a cool concept, right, mathematical philosophy.

Speaker 5 (50:11):
Oh, that is cool 'cause it sort of combines something that you would
think of more as a hard science with something a little
more conceptual.

Speaker 2 (50:18):
Yes, so she, according to this Guardian article, and to just
a great article that she wrote, that I will find
here, which was written for The Guardian as well. It's an
opinion piece titled No one in physics dares say so,
but the race to invent new particles is pointless, And

(50:38):
her point is that one of the ways that you
get funding to perform experiments with a particle collider or
accelerator or anything like that is to have someone come
up with the hypothesis that this new particle might exist
and it would explain this anomalous data point or these

(51:01):
series of points that seem anomalous in the data we've
been collecting over the period of twenty sixteen twenty seventeen
at the LHC. So then there's more funding that goes
to the group to do more particle acceleration, more particle collisions,
and then that particle, which was just a hypothesis, it
never existed and it doesn't exist, but they still got

(51:21):
to do experiments and everybody involved got to write papers
that get peer reviewed by each other and the whole system.
We talked about the problems of peer review in the past,
and Doctor Hossenfelder says that is one major thing
that's occurring right now, and if we build this new
giant one, it's just going to continue down that path

(51:43):
when in reality, new particles are not really even what
we're looking for anymore, because the standard model of physics
kind of functions as it is. We understand most of it,
the dark matter, the dark energy, the things that many
scientists and the public I would say, are most interested in,
Like what the heck is that stuff? We've always heard

(52:03):
about it, what is it? It seems to be out
there filling up space and we just don't understand. She's
saying that particle physics is not the answer. It is
quantum physics, and we are in the age of quantum physics,
and that's where research money and time. Like these very
brilliant people who are they aren't just trying to grift

(52:23):
writing these papers and getting this stuff. It's like there's
a possibility, there's a one in a million chances that
they're correct. She's saying they should take their intelligence and
their time and apply it to quantum physics instead.

Speaker 5 (52:34):
But don't those people whose expertise lies in the particle
physics field believe that their field is where it's at, though,
like I mean, they surely aren't deluding themselves. Holy like
is in this scientists disagree all the time about what
area is most important.

Speaker 2 (52:50):
I completely guys, I don't know, and I completely agree.
I just really liked the way she's arguing this in here,
and she she lists out a crazy number of theoretical
particles that have been introduced and then completely thrown out
of the door. Yeah, well it's yeah, it's not even
that they're disproved, it's like, no, they're
useless, you know.

Speaker 3 (53:10):
What I mean, rendered, They're rendered irrelevant. But there's also
there's also a uh, there's a social piece of this.
I have great respect here for all the people involved.
There's also a social aspect to this that people outside
of academia may not be fully cognizant of or aware of,
which is that there are often turf wars, right, there

(53:33):
are ideological turf wars. And when you get to the
edge of physics, you get to the heart of something
very much like philosophy, right, you get to a kind
of a battle of opinion and theory.

Speaker 2 (53:47):
Uh.

Speaker 3 (53:47):
And and people can be quite territorial in that respect.
So it is I would argue, uh, incredibly difficult to
remove that kind of sense of identification. Like in one
of the articles you shared, Matt, Professor Hausenfelder uses the
phrase societal relevance, right, what does this mean to society?

(54:13):
More so like that that in itself, I think is
an illustrative phrase, because we have to realize that a
lot of it's like astronomers fighting for observatory time. A
lot of this does come down to a sort of
turf war for lack of a better for lack of
a better way to articulate it. And again like also

(54:38):
going to go forward, not a particle physicist, not a
quantum physicist, anymore than the average person having a lucid dream.
But I will argue that this is one of those
things where you can make do with what already exists.
One thing we haven't talked about is the absolute windfall

(55:01):
this is economically for a lot of very powerful, very
interested players. When we talked about the scientists, we're talking
about the kids on the playground. We're not talking about
the people who get paid to build the playground, and
those folks very much want projects like this to happen.
You know what I mean? That is a future shifting

(55:22):
amount of funding. So there are clearly there are going
to be ulterior motives involved that go beyond the science,
and I do think it is a valid question as
to you know, with that much money, are there better
projects to put this into? Are there things that would
have a higher chance of yielding pioneering results. It's a

(55:48):
question we don't know the answer to yet, but it's
something that I would I don't know. I'm kind of
out of my depth here because this is the deep
end of the fabric of reality. But if Hossenfelder, if
we take these very valid questions, I wouldn't even call
them criticisms, very valid questions. If we take those to heart,

(56:12):
then one of the things that I would love to
hear there, Matt, is what should be done instead? Is
this already happening? Is it definitely? Definitely, the FCC is
going to be a thing.

Speaker 2 (56:26):
No. No, there's a whole process of basically proving that
this new collider would even function. That ends in twenty
twenty five. Then they have to put forward a plan
for construction, which that should end sometime in twenty twenty eight,
and then in the twenty forties you would actually have
a theoretically a working particle accelerator a collider system. The

(56:49):
thing that gives me hope in all of this is
that those teams of incredibly intelligent people are planning already
for experiments that are going to occur in the twenty
seventies. So it's that thing where if everyone believes that
the planet is going to end at some point in

(57:09):
the near future, or humanity is going to destroy itself,
or climate change is going to wreak havoc to where humanity
is mostly decimated, if everybody's thinking that way, I don't know.
I don't know. I don't know how much I believe
in intention. But we've talked about that before in the past,
how you can kind of create your own outcome because
the way you think about the future shapes

(57:31):
how you react to the present, which doesn't mean you're
actually manifesting something, you know by thinking about it, but
by thinking about it, you are changing yourself and your interactions.
And just by having these people say, oh, yeah, in
the twenty seventies, we're going to start, you know, the
second phase, which is where we actually slam the protons together,

(57:53):
You're like, oh wow, really that twenty seventies, Okay, all right,
we're gonna be okay. I don't know why it does
that in my brain, but just thinking about the future positively,
I really.

Speaker 3 (58:05):
Sure, mind over matter is a real thing. I
mean, agreed, we did the, we went over dark matter.
We found really conclusive studies that show, uh, you know,
your intention, your thought can change things, just like the
interactions you allow from people around you, you know, because
that's another stimulus. So our conscious thoughts are stimulus

(58:28):
exercised upon the brain, like you know, like the
British cab drivers who memorize the entirety of London and
their hippocampus grows larger as a result, or the Buddhist
the Buddhist monks who uh the camera of the brain
region because I'm a monster, but the brain region associated
with compassion grows larger in them simply because of the

(58:52):
meditation and the way that they the way that they
think about the world and themselves. So I see what
you're saying. When it feels more real, whether we're gaslighting ourselves,
it feels more real to have this thing, say the
twenty seventies and implicitly assume that everybody will make it
that far. I think that's beautiful.

Speaker 2 (59:11):
I think that piece is the Medulla oblongata. By the way,
I'm just kidding.

Speaker 5 (59:15):
I think we don't always have to be so fatalistic either,
you know what I mean. Like, it's so easy to
get caught up in the doom and gloom and annihilation
of everything, and you.

Speaker 4 (59:25):
Know, a little positive thinking goes a long way.

Speaker 5 (59:28):
It's another reason why I think people dig religion so much,
because it gives them something to funnel that energy into.
You know, that doesn't necessarily require answers be presented right
in front of you, So why don't we like to
follow that lead and exercise a little faith when it
comes to science. That's why I think it's interesting to
combine philosophy with science, because then you start to get

(59:50):
something more approaching, a more measured type of religion, a
more measured type of faith.

Speaker 2 (59:56):
And that's why I love Doctor Sabine Hossenfelder. Check out
her YouTube channel. It goes by the name Science Without
the Gobbledygook, which I loved, and I just, I've
been watching her videos all day. I think it's awesome.

Speaker 4 (01:00:13):
We'll check it out.

Speaker 2 (01:00:14):
That's all for now on CERN. Let us know what you
think about this whole thing, particle physics creating them black
holes and portals and all that good stuff.

Speaker 3 (01:00:23):
Which has it happened yet?

Speaker 2 (01:00:24):
Well again, if it did, how would we know if
it changed the timeline? How would we know there's some
weirdness in there?

Speaker 3 (01:00:31):
Let us know what you think, Yes, let us know
what you think. Let us know if you work at
CERN. Let us know if you work in the realm
of physics, whether particle or quantum. Let us know the
last time you used a deep fake to trick a
huge company. Let us know also your opinions of the
death penalty. It's crucial at this point in time, and

(01:00:53):
all of these things are crucial. In fact, your participation
being foremost of those crucial crucial things. We try to
be easy to find online. We can't wait to hear
from you.

Speaker 4 (01:01:04):
Right, have you visited the quantum realm? What was it like?
Let us know.

Speaker 5 (01:01:08):
You can find us on the Internet where we exist
at the handle conspiracy Stuff on Facebook. We have our
Facebook group Here's where it gets crazy YouTube and also.

Speaker 4 (01:01:18):
X fka Twitter.

Speaker 5 (01:01:20):
We are conspiracy Stuff show on Instagram and TikTok.

Speaker 2 (01:01:24):
I've only been quantum once, and you better believe I
met Paul Rudd there. He was, I'll tell you.

Speaker 3 (01:01:30):
Never got quantum.

Speaker 4 (01:01:34):
Yeah, we got two, we got two phrases.
We'll let the edit sort it out.

Speaker 3 (01:01:42):
I don't do a competition.

Speaker 2 (01:01:44):
Yeah, all right, here we go. Hey, if you want to
call us, our number is one eight three three s
T d W y t K. When you call in,
give yourself a cool nickname, and you've got three minutes,
say whatever you'd like. Please let us know if we
can use your name and message on the air. That
would be extremely helpful. If you've got more to say
than can fit in that three minute message, why not
instead send us a good old fashioned email.

Speaker 3 (01:02:03):
We are the folks who read every single email we
get. Conspiracy at iHeartRadio dot com.

Speaker 2 (01:02:28):
Stuff they don't want you to know. Is a production
of iHeartRadio. For more podcasts from iHeartRadio, visit the iHeartRadio app,
Apple Podcasts, or wherever you listen to your favorite shows.
