Episode Transcript
Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Leslie Poston (00:12):
Welcome back to PsyberSpace. I'm your host, Leslie Poston. And today, we're talking about why we're all living in different realities. Alright. Quick question. What sound do you hear when I say b a? Ba. Simple. Right? You heard me say ba.
But what if I told you that if you were watching someone say g
(00:34):
a, ga, while hearing that exact same ba sound, most of you would swear you heard d a, d a. This is called the McGurk effect, and it's one of the most elegant demonstrations that your brain isn't a camera or a microphone. It's more of a prediction machine. Your visual system, watching lips move, literally
(00:55):
overrides the sound waves hitting your eardrums. Your brain sees gah, hears bah, and splits the difference, constructing dah.
About twenty percent of you won't experience this effect at all. Same sound, same video, completely different perception. Because our brains are individually calibrated prediction engines, and we're all running slightly different
(01:16):
software. There is another example of this called Brainstorm Green Needle. Same thing where you have an audio clip, but if you're expecting to hear brainstorm, you hear brainstorm.
If you're expecting to hear green needle, you hear green needle. Your expectation literally changes the acoustic experience. Today, we're going to dismantle the idea that
(01:38):
perception is passive, that your brain is just faithfully recording reality, because it's not. Your brain is constantly generating predictions about what's coming next and then checking those predictions against incoming sensory data. And then when there's a mismatch, sometimes your brain updates its prediction.
But often, it just overrides the data and shows you what it
(02:02):
expected anyway. This means that you and I can witness the exact same event, the same photons hitting our retinas, the same sound waves hitting our eardrums, and construct genuinely, fundamentally different experiences of what happened. Not different interpretations, different perceptions. Just a note, this episode is a bit longer than
(02:24):
usual since we're going to build a complete picture of how this works. How your brain builds reality from predictions.
How your body state, whether you're hungry, exhausted, or anxious, literally tunes what you perceive. How your culture gives you a specific set of expectations before you're even aware that you're predicting anything. How memory isn't
(02:45):
recording but reconstruction. How neurotypes like autism or conditions like schizophrenia and depression involve differences in these prediction parameters. And how algorithms and deepfakes are now manipulating this aspect of our brains in ways that have profound implications for our society. This isn't just a fun neuroscience fact. This is why
(03:07):
two people can watch the same video of a police encounter and see completely different things. Why eyewitness testimony puts innocent people in prison. Why you and your partner can remember the same conversation totally differently and both be absolutely certain that you're right. Why political polarization feels so intractable.
(03:28):
Because we're not just disagreeing about how to interpret reality. We're literally inhabiting different realities constructed from different information streams. So buckle up. We're about to learn how brains build worlds, why you can't trust your own perception, and what you can actually do about it. So what's actually happening when your
(03:49):
brain constructs reality? The dominant framework in cognitive neuroscience right now is broadly called predictive processing or active inference.
Some researchers also call it the Bayesian brain hypothesis, and there's a more mathematical version called the free energy principle. These are all related ideas that share a core insight.
(04:12):
Your brain isn't passively recording the world. It's actively predicting it. Here's how it works.
Your brain has a model of the world built from everything you've experienced up to this moment. That model generates top down predictions: based on everything I know, here's what should happen next. Meanwhile, bottom up sensory signals are
(04:34):
coming in. The actual photons, sound waves, and pressure on your skin.
Your brain is constantly comparing these two streams and calculating prediction errors, the difference between what it expected and what it got. When prediction errors are small, your brain mostly shows you what it predicted. When they're
(04:54):
large, it updates the model. This is perception. Learning. It's how your brain minimizes surprise. But your brain doesn't weight all signals equally. It uses something called precision weighting, essentially a confidence dial on different information streams. High precision means trust this
(05:16):
signal. Low precision means probably noise, ignore it.
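That confidence dial has a standard mathematical form. Here's a toy sketch of precision-weighted combination, my own illustration rather than anything from the episode, assuming both the prediction and the sensory signal are simple Gaussian estimates; the `combine` function and all the numbers are invented for demonstration:

```python
# Toy sketch (illustrative, not from the episode): combining a prediction
# and a sensory signal by precision weighting, assuming both are Gaussian
# estimates. Precision = 1 / variance; the result is the
# precision-weighted average of the two means.

def combine(prior_mean, prior_var, sensory_mean, sensory_var):
    """Precision-weighted fusion of a prior prediction and a sensory signal."""
    prior_precision = 1.0 / prior_var
    sensory_precision = 1.0 / sensory_var
    posterior_mean = (
        prior_precision * prior_mean + sensory_precision * sensory_mean
    ) / (prior_precision + sensory_precision)
    posterior_var = 1.0 / (prior_precision + sensory_precision)
    return posterior_mean, posterior_var

# A confident prediction (low variance) barely budges toward the data:
confident, _ = combine(prior_mean=0.0, prior_var=0.1,
                       sensory_mean=10.0, sensory_var=10.0)
# An uncertain prediction (high variance) gets pulled most of the way:
uncertain, _ = combine(prior_mean=0.0, prior_var=10.0,
                       sensory_mean=10.0, sensory_var=0.1)
print(round(confident, 2), round(uncertain, 2))
```

With a confident prior, the estimate moves only about a tenth of the way toward the sensory value; with an uncertain prior, it's pulled nearly all the way there. Same input, different precision settings, different outcome.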
This is why you can hear your name across a crowded room even when there's tons of other noise. Your brain assigns high precision to self relevant information. But it's also why you can completely miss someone calling your name when you're deeply focused on your work. In that context, your brain has
(05:38):
turned down the precision on external audio. Or think about reading.
You can read the typo, I love Paris in the the spring, correctly as I love Paris in the spring, completely missing that the word the appears twice, because your brain predicted the sentence structure and didn't bother to carefully process each
(05:59):
word. It assigned low precision to the actual visual input because its prediction was so confident. The critical part of this is that your brain doesn't just passively predict. It acts to make its predictions come true. This is the active inference part.
When you reach for a coffee cup, your brain isn't sending motor
(06:20):
commands to your arm. It's predicting what it would feel like to be holding the cup and then minimizing the prediction error between that prediction and your current proprioceptive state. Your arm moves because your brain is trying to make the predicted sensation real. This sounds abstract, but it has some profound implications. It means perception and action are
(06:43):
fundamentally the same process.
Both are about minimizing prediction error, and it means that what you perceive isn't some objective readout of the world. It's your brain's best guess about what's out there, based on its model, weighted by its confidence in different signals. Now, I should mention, this framework is very
(07:03):
influential and well supported by a lot of neuroscience research, but it's not without debate. Some researchers argue it's too broad, or that we still need to work out the details of exactly how the brain implements these computations. And there are complementary frameworks that emphasize the role of the body and environment even more.
(07:24):
But for understanding why we construct different realities, this predictive processing lens gives us a powerful set of tools. Remember the McGurk effect from a minute ago? That's precision weighting in action. When visual and auditory signals conflict, your brain has to decide which one to trust more. For most people in clear viewing conditions, visual speech gets
(07:47):
weighted heavily.
So the visual gah overrides the auditory bah, and you hear dah. But for that twenty percent who don't experience the effect, their brains are weighting the signals differently. Same input, different precision settings, different reality. Now where did these predictions come from? They come from your priors,
(08:07):
everything your brain has learned about how the world works.
You've learned that objects fall down, not up, that faces have two eyes, that words follow certain patterns. These priors shape every prediction your brain makes. And this is where things get a little more complex, because priors aren't universal. They're shaped by your specific history of experiences, your
(08:30):
location, your culture, your body, even your attention patterns, which means we're all running slightly different prediction models on the same sensory input. We'll unpack exactly how that works in the next segments, but for now, here's the key takeaway.
Your brain is a prediction machine that's constantly generating expectations, weighting incoming signals by
(08:52):
confidence, and constructing your perceptual experience from that interaction. You're not seeing the world as it is; you're seeing your brain's best hypothesis about what's out there. And that hypothesis can be very, very different from someone else's. So we've established that your brain is
(09:12):
constantly predicting what comes next. Now let's talk about what determines which predictions get made in the first place.
The answer is attention. Attention, in the framework we're using today, is essentially the allocation of precision. What you attend to is what your brain decides to weight heavily in its prediction error calculations. What you ignore
(09:35):
gets low precision, treated as noise. This has a striking consequence.
If you're not predicting something, you might not perceive it at all, even if it's right in front of you. One of the more famous demonstrations of this is the invisible gorilla study. People watched a video of students passing basketballs and counted the number of passes. Midway through, someone in a
(09:58):
gorilla suit walks through the scene, stops, beats their chest, and walks off. About half of viewers completely missed it. Not didn't notice it, literally did not see it. Because they were predicting basketball passes, their attention allocated precision to ball movements. And the gorilla, totally unexpected, didn't even make it into their constructed
(10:20):
reality. Or take change blindness. People miss huge changes to images, a plane engine appearing or disappearing, buildings changing color, if the change happens during a brief disruption. You're not predicting a change, so your brain doesn't allocate the precision needed to detect it. There's also something called
(10:41):
attentional blink. If you're watching for a target in a rapid stream of images, and a second target appears within about a half a second of the first, you'll often miss the second one completely. Your prediction machinery is still processing the first target and hasn't allocated precision to detecting another one yet.
Here's why this matters. This isn't just about visual tricks.
(11:04):
This is about how belief shapes perception at a fundamental level. If you're predicting or expecting certain things, you'll notice evidence for them. If you're not predicting something, even clear evidence might not register.
This is confirmation bias, but not the way we usually think about it, and not the way we've talked about it on previous
(11:25):
episodes. It's not that you're consciously ignoring contradictory evidence. It's that your attention, your precision allocation, is literally preventing that evidence from becoming part of your perceptual experience. You're just not seeing it in the first place. So here's the implication.
Eyewitness testimony, which we rely on in courtrooms and news
(11:47):
reporting and everyday credibility judgments, is deeply unreliable. If a witness wasn't expecting to see something, even something important, even if it's right in front of them, their brain may not have constructed that perceptual experience. They're not lying when they say they didn't see it. It genuinely wasn't in their reality. Their attention
(12:09):
spotlight didn't illuminate it, so their prediction engine didn't build it.
And this is happening to all of us all the time. Right now, you're missing most of what's in your environment because your brain isn't predicting it, isn't allocating precision to it. You're perceiving a tiny curated slice of available information,
(12:30):
the slice your current predictions and attention patterns have constructed. That's one reason our attention is such a hot commodity in the digital programs, apps, games, and platforms that we interact with. Different predictions, different attention, different slice, different reality.
Now let's add another layer. Your body is constantly voting
(12:53):
on your reality. There's this concept in neuroscience called interoception, your brain's model of your body's internal state. How fast is your heart beating? How much energy do you have?
Are you in pain, hungry, anxious? These aren't just background sensations. They're signals that directly tune your prediction machinery. Research shows that people who are better
(13:16):
at detecting their own heartbeat, what's called interoceptive accuracy, show measurably different responses to threat and ambiguity. Their prediction engines are getting different information from their bodies, which changes how they weight external signals.
If your heart is racing, your brain upweights predictions about threat. Ambiguous faces look angrier. Neutral situations
(13:40):
feel more dangerous. This isn't bias in the sense of being wrong; it's Bayesian inference. A racing heart is evidence that something might be threatening, so your brain rationally updates its predictions.
But your heart might be racing because you just climbed stairs or had an extra cup of coffee or didn't sleep well. Your
(14:01):
prediction engine doesn't necessarily know why. It just knows elevated heart rate. Better be alert for threats. So it tunes your precision settings accordingly, and suddenly your perceptual reality has more threat in it.
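That "rationally updates" step is just Bayes' rule. Here's a minimal sketch with invented numbers; the function and every probability below are my own illustration, not figures from any study:

```python
# Toy Bayes update (made-up numbers): how an elevated heart rate shifts
# the estimated probability that the situation is threatening, regardless
# of whether the racing heart actually came from stairs or coffee.

def posterior_threat(p_threat, p_racing_given_threat, p_racing_given_safe):
    """P(threat | racing heart) via Bayes' rule."""
    p_racing = (p_racing_given_threat * p_threat
                + p_racing_given_safe * (1.0 - p_threat))
    return p_racing_given_threat * p_threat / p_racing

# Baseline belief: threats are rare (1%), but threats usually raise heart
# rate (90%) while safe situations rarely do (10%).
updated = posterior_threat(0.01, 0.90, 0.10)
print(round(updated, 3))
```

With these made-up numbers, the threat estimate jumps from a 1% baseline to about 8%, and the update is identical whether the racing heart came from a real danger or a staircase: the prediction engine only sees the physiology.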
Hunger works the same way. When you're hungry, food related words literally pop out faster in visual search tasks. Your
(14:23):
brain has allocated higher precision to food cues. You're not choosing to notice food more. Your prediction machinery has been retuned by your body state.
Sleep deprivation shifts interpretation of ambiguous stimuli toward negative readings. Anxiety cranks up threat detection. Physical pain increases irritability and
(14:44):
decreases patience, which are really just shifts in how your brain is weighting social predictions. Even your body map, your sense of where your body is and what it can do, is constructed through prediction. The rubber hand illusion demonstrates this beautifully.
If you watch a rubber hand being stroked while your real hand is
(15:05):
stroked in sync, your brain starts to predict that the rubber hand is yours. It incorporates it into your body model. You'll actually flinch if someone threatens the rubber hand. The temperature of your real hand will change if the rubber hand is put in ice. Your constructed reality now includes a rubber hand as part of your body.
(15:27):
And then there's the placebo and nocebo effects. If you believe a pill will reduce your pain, your brain predicts reduced pain, which actually changes your physiological pain processing. The prediction literally alters the reality. Same with the nocebo effect. If you expect side effects, your brain predicts
(15:49):
them, and often produces them.
So before you make an important decision, check your body state. Are you hungry? When did you last sleep? Are you anxious about something unrelated? This is one reason judges give harsher sentences before lunch, and medical residents make worse diagnostic decisions after twenty four hour shifts.
(16:09):
Not because they're bad people or bad doctors, but because their bodies are tuning their prediction engines toward different settings. Your stomach, your heart rate, your sleep debt, they're all voting on what reality you construct. And most of the time, you have no idea it's happening. So we've established that your perception of the present is a construction
(16:31):
based on predictions. Here's a slightly unsettling note.
Your memory of the past is too. Memory is not a recording; it's a reconstruction. Every time you remember something, your brain is rebuilding that experience from fragments, using your current priors to fill in the gaps. Decades of research have
(16:54):
demonstrated this with the misinformation effect. Show people a video of a car accident and later ask half of them how fast were the cars going when they hit each other, and ask the other half how fast were the cars going when they smashed into each other.
The smashed group remembers the cars going faster. They're also
(17:16):
more likely to remember seeing broken glass that wasn't there. The word smashed activated different priors, more violent priors, which were then incorporated into the reconstructed memory. You can nudge people into entirely false memories with the right suggestion. People have been convinced they were lost in a shopping mall as a child when it
(17:38):
never happened.
Convinced they saw Bugs Bunny at Disneyland, which is impossible because Bugs Bunny is Warner Brothers. The mechanism for this is called memory reconsolidation. Research shows that every time you recall a memory, it becomes briefly unstable, modifiable, before being stored again. Every recall
(17:59):
is a rewrite opportunity, which means the more you remember something, the more chances you've had to edit it. And you're editing based on your current priors, your current emotional state, and your current narrative about yourself.
This is why two people can remember the same conversation completely differently and both be absolutely certain that
(18:21):
they're right. They're not arguing about what happened. They're arguing about two genuinely different reconstructed memories, both built from fragments and predictions. Studies on flashbulb memories, those vivid memories of exactly where you were during major events, show this clearly. Nine eleven, for example.
People are incredibly confident in these memories, and they're
(18:44):
incredibly wrong. When researchers compare people's immediate reports to their memories years later, the details have changed dramatically, but the confidence has not. Confidence is not correlated with accuracy in memory at all. So here's what this means practically: In legal proceedings, therapy, family disputes, anywhere the past
(19:05):
matters, we need to remember that remembering is a creative act. It's your prediction engine running backwards, filling in gaps with what seems plausible now, based on who you are now.
Your memory is not lying to you. It's doing exactly what it evolved to do, constructing a coherent narrative from incomplete information. But that narrative is a construction, not
(19:28):
a recording. And every time you remember, you're reconstructing, editing, and rewriting. Your past, like your present, is a prediction.
So we're halfway through our segments today, and we've seen how attention, body state, and memory shape constructed reality. Now let's talk about how culture preloads your
(19:49):
prediction engine before you're even aware you're perceiving anything. Language is a big part of this. There's research on Russian speakers that's really revealing. Russian has two separate words for light blue and dark blue, and apologies if I'm butchering the pronunciation.
Those words are goluboy and siniy, with no umbrella term for blue. English,
(20:12):
however, lumps them together. When you show Russian speakers two shades of blue and ask them to distinguish them quickly, they're faster if the shades cross the boundary than if they're both in the same category. English speakers show no difference. Now Russian speakers can obviously perceive the difference between shades of blue within a category.
(20:33):
It's not that the language creates the ability to see, but the language shapes attention allocation: which differences get high precision, which get low. The linguistic categories tune your prediction machinery to notice certain boundaries more readily. And this extends far beyond color. Language has been described as cognitive technology, a tool that
(20:56):
literally tunes your prediction engine. The words you have available shape which predictions you generate easily and which require effort.
Culture shapes predictions even more broadly. Research shows us that when viewing scenes, East Asian participants encode more contextual information, relationships between objects,
(21:17):
backgrounds, and settings, while Western participants focus more on individual objects. Here's something else to note. Almost everything we know about how human perception works comes from studies on what we call the WEIRD population. That's weird in all caps.
It stands for Western, educated, industrialized, rich,
(21:38):
democratic. This was documented in a 2010 paper I'll put in the show notes. This means that as of 2010, 96% of psychology research participants came from populations representing 12% of humanity. And WEIRD populations are statistical outliers on many
(21:58):
measures. They're more individualistic and more analytical in visual perception, with different moral reasoning and even different spatial cognition.
We've built an entire field by studying the exceptions and calling them universal. Then we export findings, design technology, education systems, healthcare protocols based on
(22:21):
this research, and we act surprised when they don't work elsewhere. This isn't just an academic problem. When AI is trained on data from WEIRD populations, it encodes WEIRD priors. When medical diagnosis criteria are based on how symptoms present in Western populations, other presentations get missed.
(22:42):
When universal design principles reflect Western attentional patterns, they're creating barriers for everyone else. Your culture doesn't just influence how you interpret reality. It preloads the prediction machinery that constructs reality in the first place. It gives you a specific set of priors about what's important, what to attend to, and how to
(23:04):
carve up experience into categories before you're conscious of perceiving anything. You inherit a reality model, and that model may be radically different from someone else's, not because one is wrong, but because you've been trained on different data.
Let's talk about what happens when the parameters of that
(23:25):
prediction machinery are set differently at a more fundamental level. I want to be really careful with framing here. We're not talking about broken brains. We're talking about different precision settings, different updating dynamics, different prediction parameters. Different, not deficient. Let's start with people operating with an
(23:47):
autistic neurotype, because this is where the research has really evolved recently.
For a long time, the dominant theory was weak priors or attenuated priors, the idea that autistic brains don't weight past experience heavily, leading to overreliance on current sensory input. But recent research from 2025 shows it's
(24:09):
even more nuanced than that. The findings indicate that autistic adults don't have weak priors. They have different updating dynamics. Specifically, they rely more heavily on sensory input when iteratively updating their beliefs about what's happening.
This leads to slower adaptation early in a session. It takes
(24:30):
longer to build up stable predictions, but eventually reaches similar integration to neurotypical brains. It's not that the priors are weak. It's that the updating algorithm is just tuned a little differently. And this explains a lot. It explains why unexpected changes can be so overwhelming.
If your updating dynamics are slower, of course you need more
(24:52):
time to integrate new patterns. It explains why routines are so important. Routines reduce the need for constant updating. But it also explains exceptional abilities in pattern detection and detail orientation. If you're weighting sensory input more heavily, you're catching details others miss.
For psychosis and schizophrenia, the picture is different. Recent
(25:17):
research here suggests aberrant precision weighting, specifically overconfident predictions that override sensory input. Hallucinations may be predictions that are weighted so heavily, assigned such high precision, that they're experienced as perception even without corresponding sensory input. The prediction becomes the reality.
(25:41):
Delusions may be high precision priors that are resistant to prediction error.
New evidence should update the belief, but the prior is weighted so heavily that the brain explains away the contradiction instead. And for depression, research shows systematically negative priors biasing interpretation of ambiguous information. This isn't just pessimism or thinking
(26:05):
negatively. It's Bayesian inference with systematically biased input weights. When something ambiguous happens, someone doesn't text back, a project has mixed results, a social interaction is unclear.
A depressive brain might predict negative interpretations with higher confidence and then selectively attend to prediction
(26:27):
errors that confirm those negative predictions, further entrenching them. Anhedonia, the inability to feel pleasure, may be reduced precision on reward prediction. If your brain assigns low confidence to predictions about positive outcomes, you stop generating these predictions, which means you stop acting to pursue rewards. So here's why this
(26:50):
framework matters clinically. If autistic sensory overwhelm comes from different updating dynamics rather than a broken system, accommodation should focus on reducing the need for rapid updating.
Predictable environments, clear transitions, adequate processing time, clear instructions, not forcing neurotypical updating
(27:10):
speed. For depression, if negative priors are the core issue, treatment would need to do more than think positive. It needs to systematically retrain the prediction engine with experiences that generate positive prediction errors, actual evidence that contradicts the negative predictions, and that's weighted heavily enough to update the priors. Understanding
(27:33):
these as differences in predictive processing parameters, not deficits, suggests new intervention targets and more respectful, effective support. All right.
So we've built up from individual prediction engines through body states, culture, and clinical differences. Now let's talk about what happens when your information
(27:55):
environment gets deliberately manipulated. First, algorithmic priors. Your social media feed, your search results, your recommended content. These aren't neutral windows into reality.
They're personalized prediction engines trained on your past behavior, and they function like external priors. They curate
(28:17):
what prediction errors you encounter. If your feed consistently shows you economic recovery stories and someone else's feed shows recession stories, you're not just getting different news. You're getting different evidence bases. And from those different evidence bases, you're rationally constructing different predictions about the economy.
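One way to see how rational that divergence is: a toy simulation, my own invention with made-up numbers, of two identical Bayesian updaters fed differently curated feeds:

```python
# Toy simulation (illustrative, not from the episode): two identical
# belief-updaters, each fed a feed curated toward a different conclusion.
# Belief is P(economy is recovering); each story nudges it by Bayes' rule.

def update(belief, story_is_positive, likelihood_ratio=2.0):
    """One Bayesian update. likelihood_ratio: how much more likely a
    positive story is under 'recovery' than under 'recession'."""
    lr = likelihood_ratio if story_is_positive else 1.0 / likelihood_ratio
    odds = belief / (1.0 - belief) * lr
    return odds / (1.0 + odds)

feed_a = [True] * 8 + [False] * 2   # mostly recovery stories
feed_b = [False] * 8 + [True] * 2   # mostly recession stories

belief_a = belief_b = 0.5           # identical starting prior
for story in feed_a:
    belief_a = update(belief_a, story)
for story in feed_b:
    belief_b = update(belief_b, story)

print(round(belief_a, 3), round(belief_b, 3))
```

Same starting prior, same update rule, rational inference on both sides, and the two beliefs still land near opposite certainties, purely because of what each curated feed served as evidence.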
(28:37):
This is going beyond filter bubbles and echo chambers. These are reality bubbles. Different sensory diets creating different priors, creating different rational conclusions. Research from 2018 found that exposing people to opposing viewpoints on social media sometimes increases polarization rather than reducing it. Why?
(28:58):
Because if the prediction error is too large, if the information is too far from your existing priors, your brain struggles to integrate it. It gets rejected as noise or explained away. Recent work shows that sharing itself can become habitual, an automatic behavior that bypasses accuracy evaluation. Your
(29:20):
precision weighting on truth declines when sharing is driven by habit rather than deliberation. Let's talk about virtual and augmented reality.
Why does VR feel real? Because it provides sensory motor contingencies that match your predictions. You turn your head, the visual world updates accordingly. You reach out and
(29:43):
you see your hand move. Your prediction errors are minimized, so your brain accepts the virtual environment as real.
This is called place illusion, the sense of being in a location, and it depends entirely on your prediction machinery being satisfied. When sensory motor contingencies break, when there's lag or movements don't match
(30:04):
predictions, presence collapses. This has therapeutic potential. VR exposure therapy works because your brain treats the virtual spider or virtual height as real enough to generate a fear response and, through repeated exposure, potentially update threat predictions. It also means that we can hack
(30:25):
presence.
We can create experiences that feel completely real even though they're entirely synthetic. And that brings us around to deepfakes: audio and video that's been synthesized or manipulated to show something that never happened. And here's what matters. A 2024 meta analysis found that humans are at
(30:46):
approximately chance accuracy for detecting high quality deepfakes. That means fifty fifty, a coin flip.
Voice cloning is even harder to detect than visual deepfakes because your prediction machinery expects voice to match identity, and when it does, you trust it. Why do deepfakes work so well? Because they provide exactly the sensorimotor
(31:09):
contingencies your prediction engine expects. Lips sync up with words. Prosody matches emotion.
The prediction errors are minimal, so your brain constructs this is real. The consequences are severe. Deepfake pornography overwhelmingly targets women, creating synthetic sexual content without consent. Voice clones are used for fraud,
(31:32):
scammers calling elderly people, faking a grandchild's voice, asking for bail money. Political deepfakes create false evidence of statements or actions that never occurred.
There are regulatory efforts emerging. Open letters urging deepfake regulation were published in 2024, but overall legislation is far behind the technology. And here's the
(31:56):
uncomfortable truth. If you're confident you can spot a deepfake, you're probably wrong. Your prediction engine evolved to trust sensory motor contingencies, and deepfakes exploit exactly that trust.
Meanwhile, algorithms are feeding you a personalized evidence diet that makes your prior beliefs seem increasingly
(32:17):
obviously correct. You're not in a filter bubble. You're in a reality bubble where the prediction errors you encounter have been curated to confirm your existing model. And when democratic society depends on a shared factual foundation, when we need to agree on what happened before we can debate what to do about it, this is potentially an existential
(32:39):
crisis. We're not just disagreeing about interpretation.
We're constructing fundamentally different realities from corrupted, manipulated information systems. So what happens when people with genuinely different constructed realities try to communicate? First, understand that
(33:00):
collective reality constructionis a thing. Groups synchronize
their priors through sharedattention, repeated narratives,
and social learning. They watchthe same media, hear the same
stories, attend to the samecues, and gradually their
prediction engines align.
This is how cultures form andhow communities function. Shared
(33:22):
reality enables coordination.But it also means that when
information environmentsdiverge, group realities
diverge. On a small scale, thisshows up in relationships. You
never told me that fights aren'tusually about lying.
They're about failed sharedreality construction. One person
constructed a memory of tellingyou. The other person didn't
(33:44):
construct a memory of being told. Both reconstructed memories feel absolutely certain. You're not arguing about what happened.
You're arguing about two different reconstructions. Couples therapy, in this framework, is partially about prior alignment, getting two prediction engines to construct more similar realities going forward. But the stakes get much
(34:05):
higher when we zoom out. There'sa concept in philosophy called
epistemic injustice where somegroups' knowledge and testimony
are systematically discounted ordismissed. Medical gaslighting
of women is a clear example ofWomen report pain.
Doctors, whose priors aboutcredible pain presentations were
trained mostly on male patients,assign low precision to those
(34:28):
reports. The women's constructed reality includes severe pain. The doctor's constructed reality includes exaggeration or psychological causes. Different priors, different precision weighting, different realities. And the one with institutional power gets enforced as truth.
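To make "priors" and "precision weighting" concrete, here's a minimal sketch of the Bayesian idea behind them. The scenario and every number in it are purely illustrative, not drawn from any real clinical data: two observers receive the exact same report, but because they hold different priors and assign the report different precision (trust), they construct different conclusions.

```python
def precision_weighted_update(prior_mean, prior_precision, obs, obs_precision):
    """Combine a prior belief with an observation, each weighted by its
    precision (inverse variance). Higher precision means more trust in
    that signal. Returns the posterior mean of the updated belief."""
    total_precision = prior_precision + obs_precision
    return (prior_precision * prior_mean + obs_precision * obs) / total_precision

# The same report of pain intensity (8 out of 10) reaches two observers.
reported_pain = 8.0

# Observer A: neutral prior (5), assigns high precision to the report.
a = precision_weighted_update(prior_mean=5.0, prior_precision=1.0,
                              obs=reported_pain, obs_precision=4.0)

# Observer B: skeptical prior (3), assigns low precision to the report.
b = precision_weighted_update(prior_mean=3.0, prior_precision=4.0,
                              obs=reported_pain, obs_precision=1.0)

print(a)  # 7.4 -- conclusion lands close to the report
print(b)  # 4.0 -- conclusion is dominated by the skeptical prior
```

Same data, different priors and precision settings, different constructed realities, which is exactly the asymmetry described above.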
This happens in police interactions as well. Different
(34:50):
priors about threat mean that officers and civilians construct genuinely different realities from the same encounter. An officer's prediction engine trained on threat indicators assigns high precision to certain movements, certain demographics. A civilian's prediction engine trained on everyday behavior constructs those same actions as
(35:11):
nonthreatening. Both realities feel obvious.
Both feel objective. It also happens in workplaces. Cultural priors mean that identical behavior gets constructed as assertive from one person and aggressive from another, as thorough from one demographic and nitpicky from another, and
(35:31):
it scales to entire political systems. Truth decay, the collapse of shared factual foundations, happens when information ecosystems diverge enough that groups are constructing incommensurable realities, not just different interpretations of agreed-upon facts, but different facts, different evidence, different rational conclusions from
(35:52):
different constructed worlds. And here's what's critical to understand.
This asymmetry in whose reality counts as real is a primary mechanism of oppression. When doctors don't believe women's pain, when police see threat in normal behavior, when Indigenous knowledge gets dismissed as anecdotal while Western science
(36:13):
is always trusted as objective, when non-WEIRD (Western, Educated, Industrialized, Rich, Democratic) perceptual patterns are pathologized, that's not bias in the usual sense. That's one group's priors, one group's precision settings, one group's constructed reality being enforced as objective truth. Everyone else's realities get marked as subjective, unreliable, and mistaken. So
(36:35):
political polarization isn't just that we disagree about solutions.
It's that we're constructing different realities from different information streams filtered through different cultural priors, weighted by different attention patterns. You're not convincing someone to interpret the same thing differently. You're trying to convince them to see a different thing altogether, and whose reality gets to be reality is
(36:59):
always a question of power. Alright. We've spent nine segments dismantling the idea that perception is objective.
So what do we actually do with this? Because you can't step outside your prediction engine. You can't perceive without predicting. But you can make your brain more flexible, more
(37:21):
accurate, and more open to update. Here are five concrete skills.
First, name your priors explicitly. Before you form an opinion on something, force yourself to articulate, "I'm predicting X because of prior experiences Y." Just making them visible makes them revisable. If you can't name the priors
(37:44):
underlying a belief, that's a red flag that you're running on autopilot. Second, ask, "What would I notice if I were wrong?" This forces you to specify which prediction errors would actually update your belief. If your answer is that no evidence could change your mind, that's not a prior. It's an unfalsifiable ideology.
(38:07):
Real priors generate testable predictions. Third, the reality check ritual.
Before important decisions, run through this. Check your body state. Are you hungry, exhausted, or anxious? Those are tuning your precision settings right now. Then list your current priors explicitly.
(38:28):
Then seek one strong source that challenges them, not to abandon your belief, but to generate a genuine prediction error and see if your model survives contact with it. Fourth, steelman listening. When someone seems completely unreasonable, ask yourself: What priors would make their response rational? Try to
(38:48):
reconstruct their prediction engine, not to agree with them, but to understand that they're running a different model on different data and getting output that makes sense within that framework.
Fifth, update publicly. When you change your mind, say so. Model that updating on prediction errors is intellectual strength,
(39:11):
not weakness. We need to normalize prior revision. For media hygiene specifically, diversify your feed algorithmically.
Use a trustworthy VPN and try surfing the Internet from different countries to get a broader perspective in your feed. Actively follow sources that challenge your existing
(39:31):
priors. Follow at least one person who annoys you but engages honestly. Before you share something, pause and do one disconfirming search. Actively look for the strongest case against what you're about to share.
Seek original sources instead of aggregators. In fact, I
(39:52):
recommend actively blocking aggregator accounts and AI-generated content accounts on social media. And periodically, do information fasting. Disconnect entirely for a day or so just to reset your algorithmic baseline. And now let's talk ethics, because this all has life-or-death stakes.
(40:13):
In health care, women and minorities get diagnosed more slowly and treated less aggressively because of biased priors about credible symptom presentation. Pain management shows massive disparities. Mental health conditions get misdiagnosed across cultures because diagnostic criteria encode WEIRD priors. We need systematic debiasing training for
(40:36):
clinicians and diverse data sets for AI diagnostics. In criminal justice, eyewitness testimony is putting innocent people in prison because we don't understand memory reconstruction.
Racial bias in threat perception by police and juries leads to different constructed realities of the same encounter and unjust outcomes. We need judicial education on perception
(41:01):
science, body camera requirements, and reform of how eyewitness testimony is weighted. In democracy, electoral misinformation and deepfakes are creating manufactured realities that are indistinguishable from authentic evidence. Epistemic polarization is preventing any shared factual foundation for deliberation. We need media literacy education
(41:24):
starting young, regulation of synthetic media, and platform accountability for algorithmic amplification.
And there's a consent crisis we need to name explicitly. Deepfake pornography is a reality violation. Someone's face and voice used to construct synthetic experiences they never
(41:44):
participated in or consented to. Voice cloning enables fraud that exploits trust. We need a "right to your reality" framework and criminal penalties for malicious deepfakes.
Here's the ethical core. Epistemic humility doesn't mean all realities are equal. Some models are better calibrated to predictive success than others. Science works because it's a
(42:08):
systematic method for updating priors based on prediction errors. But epistemic humility does mean recognizing that your reality is a construction, not the territory, that someone else's different construction isn't necessarily wrong.
It may be built from different, equally valid inputs. And that when your constructed reality consistently aligns with
(42:30):
institutional power while others' don't, you have an obligation to examine whether your priors are actually superior or just dominant. Remember where we started? The McGurk effect. You heard "da" because your brain integrated visual and auditory streams.
Now you know that integration is shaped by your body, your
(42:51):
culture, your attention, your past experiences, your information environment, and whether algorithms have decided what you should see. Different inputs, different priors, different predictions, different constructed realities. The hopeful part? Once you understand that perception is constructed, you can start examining the construction
(43:11):
process and adjust your priors. You can diversify your inputs, and you can check your body state before decisions.
You can seek disconfirming evidence and recognize when someone else's reality, even if it's different from yours, is built on a foundation you'd find rational if you had their data.
Your action item for the week. Pick one belief you hold
(43:33):
strongly. Write down the priors underlying it. What experiences shaped those priors?
What information stream maintains them? And then ask, "What would I need to see to update this belief?" And then actively seek one high-quality source that challenges it. Not to abandon that belief necessarily, but to test whether
(43:56):
it survives contact with genuine prediction errors and to see if your mental model is robust or brittle. Because every unjust verdict based on flawed eyewitness testimony, every medical dismissal of reported pain, every algorithmic deepfake, every unnecessary use of force based on misperceived
(44:16):
threat, these are failures of reality construction with life-or-death consequences.
Thanks for listening to this episode of PsyberSpace. I'm your host, Leslie Poston, signing off. And remember, we're all living in different realities, but we don't have to stay trapped in them. Stay curious to avoid that trap. Oh, and don't forget to subscribe so that you don't miss a week.
(44:37):
And send this to a friend if you think they'd like a reality check.