October 3, 2025 28 mins

The Transparency Trap: How AI Disclosure Erodes Trust

In this special episode of PsyberSpace, host Leslie Poston explores a new study revealing that people who disclose AI use in professional settings are trusted significantly less than those who keep it a secret. This phenomenon is linked to identity protective cognition and professional identity threats. The discussion delves into how legitimacy and social norms shape trust dynamics, the role of cognitive dissonance, and systemic issues that exacerbate the AI transparency crisis. Poston also offers potential strategies to address these challenges, emphasizing the need for a cultural shift in professional identity and transparent AI integration.

00:00 Introduction to Today's Unique Episode
01:41 The Transparency Dilemma Study
03:32 Understanding the Legitimacy Discount
04:07 Identity Protective Cognition and AI
06:29 The Role of Professional Identity
09:32 Moral Licensing and Cognitive Dissonance
19:35 Systemic Issues and Forced AI Adoption
22:06 Strategies for Cultural and Institutional Change
25:28 Conclusion and Broader Implications

References

Schilke, O., & Reimann, M. (2025). The transparency dilemma: How AI disclosure erodes trust. Organizational Behavior and Human Decision Processes, 188, 104405. 

Festinger, L. (1957). A theory of cognitive dissonance. Stanford University Press.

Tajfel, H., & Turner, J. C. (1979). An integrative theory of intergroup conflict. In W. G. Austin & S. Worchel (Eds.), The social psychology of intergroup relations (pp. 33–47). Brooks/Cole.

Lamont, M. (1992). Money, morals, and manners: The culture of the French and the American upper-middle class. University of Chicago Press.
(see past episodes for more)

Episode Transcript

Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Leslie Poston (00:12):
Welcome back to PsyberSpace. I'm your host, Leslie Poston. Usually, we review several studies relating to a theme or topic, but today we're doing things a little differently. Today, we're diving into one new research finding that reveals something deeply uncomfortable about human psychology. Imagine you're being honest.

(00:35):
You disclose that you used generative AI to help with a work task. Your colleague doesn't disclose, but secretly, they used AI too. Who gets trusted more? If you guessed the honest person, you'd be wrong. A massive new study with over 3,000 participants across 13 different experiments found that

(00:56):
people who admit to using AI are trusted significantly less than those who stay silent, even when the silent ones are secretly using AI themselves.
And here's the kicker. The people who use AI in secret are often the harshest judges of those who admit it. This isn't a story about AI. It's a story about identity, legitimacy, and

(01:19):
why our psychology creates perverse incentives that punish honesty and reward deception. Today, we're going to unpack why this happens, what it reveals about how we construct professional identity, and why this transparency trap is almost psychologically inevitable.
Let's get into it. Let me tell you about the study that kicked

(01:44):
off this whole exploratory episode. Oliver Schilke and Martin Reimann published research this past month called The Transparency Dilemma: How AI Disclosure Erodes Trust in the journal Organizational Behavior and Human Decision Processes. They conducted 13 preregistered experiments with over 3,000

(02:05):
participants in multiple contexts: classrooms, hiring, investment decisions, and creative work. The findings were consistent and striking.
In one study with 195 students, professors who disclosed using AI for grading were rated significantly less trustworthy, scoring 2.48 on a seven-point scale compared to professors who

(02:30):
disclosed using a human teaching assistant, who scored a 2.87, or professors who made no disclosure at all, who scored a 2.96. That's a large effect size, and it's not a small difference in perception. In another study, with 426 participants, startup founders who disclosed AI use

(02:51):
received trust scores averaging 4.55 out of seven, while founders who made no disclosure averaged 5.63. That's more than a full point drop just for being honest about AI use.
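(For readers of the transcript: here's a minimal sketch that simply restates the means quoted above and computes the gaps. The grouping and labels are mine, not the paper's.)

```python
# Mean trust ratings on a seven-point scale, as quoted above from
# Schilke & Reimann (2025); the dictionary layout and labels are illustrative.
reported_means = {
    "professors (n=195)": {
        "disclosed AI use": 2.48,
        "disclosed a human TA": 2.87,
        "no disclosure": 2.96,
    },
    "startup founders (n=426)": {
        "disclosed AI use": 4.55,
        "no disclosure": 5.63,
    },
}

for context, scores in reported_means.items():
    penalty = scores["no disclosure"] - scores["disclosed AI use"]
    print(f"{context}: disclosing AI costs roughly {penalty:.2f} points of trust")
```

(On those figures, the founders' gap works out to 1.08 points, the "more than a full point drop" mentioned above, while the professors' gap is about half a point.)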
And here's what makes this even more interesting.
The researchers tested everything. They tried different

(03:11):
ways of framing the disclosure. They tested whether it mattered if people already knew AI might be involved. They looked at voluntary versus mandatory disclosure policies. None of it prevented the trust erosion.
Being honest about AI use consistently resulted in lower trust regardless of how it was framed. The researchers called

(03:33):
this the legitimacy discount. When you disclose AI use, people perceive your work as less socially appropriate, less legitimate, even if the quality of the work itself seemed identical. And this isn't about algorithm aversion, which is when people distrust AI systems themselves. This is different.

(03:53):
This is about people distrusting you when you admit to using AI even more than they would distrust the AI operating alone. So why does this happen? That's what we're unpacking today. In a previous episode of PsyberSpace called Mind Locked, I talked about identity protective cognition, how our brains

(04:15):
respond to challenges to our beliefs, not with rational evaluation, but with threat responses. When core beliefs become part of our identity, contradictory information activates brain regions associated with self preservation and social identity, not the regions involved in logical reasoning.
The AI transparency trap is a textbook case of this mechanism.

(04:40):
When someone discloses AI use, observers aren't simply asking, is this work good? They're asking, what does this mean about me? If AI assisted work is considered legitimate, professional quality work, then what does that imply about the purely human work that I do? Am I suddenly less valuable?
Less skilled? Less necessary? This is an identity threat.

(05:04):
Professional identity for knowledge workers is built on a foundation of human cognitive labor: our thinking, our creativity, our expertise. AI threatens that foundation.
But here's the thing. As long as everyone pretends AI isn't being used, or at least doesn't talk about it, everyone's identity remains intact. The internal and societal story that real work is

(05:29):
purely human can be maintained. When someone breaks that implied social contract by being honest about AI use, they're not just making a disclosure about their own work. They're forcing everyone else to confront an uncomfortable question about their own professional value.
And people really, really don't like being forced to confront

(05:50):
identity threats. This explains why the trust penalty happens regardless of work quality. The work could be objectively excellent. It doesn't matter. The disclosure itself is the problem because it triggers identity protective responses in observers.
Their brains shift from evaluate the work mode to protect my

(06:11):
sense of self mode. And in that mode, punishing the person who triggered the threat by judging them as less trustworthy is a way of protecting the boundaries of what counts as legitimate professional work and, by extension, protecting your own professional identity. Let's talk about the concept of

(06:32):
legitimacy more directly because it's central to understanding what's really happening here. In sociology and organizational psychology, legitimacy refers to the perception that an action, decision, or entity is appropriate, proper, or desirable within a given social system. It's not about whether

(06:52):
something actually works well or produces good outcomes.
It's about whether it conforms to social expectations and norms. The researchers found that AI disclosure reduces perceived legitimacy. But what does that actually mean? It means that using AI violates an unspoken norm about how

(07:12):
professional work should be done. This norm says real professionals do their own thinking.
Real expertise comes from human cognition. Authentic work is purely human. These aren't written rules. They're implicit expectations that define group boundaries. They tell us who

(07:32):
belongs in the professional in-group and who doesn't.
And like all group boundaries, they're maintained through social enforcement: praise for those who conform, punishment for those who don't. Here's where it gets psychologically interesting. Legitimacy judgments are asymmetric. Conforming to norms gains you only a small amount of trust,

(07:55):
but violating norms loses you a lot of trust.
This asymmetry creates a strong incentive to either genuinely conform or at least appear to genuinely conform. When you disclose AI use, you're essentially admitting to norm violation. Even if the norm is outdated, even if it doesn't make practical sense, even if secretly everyone else is

(08:19):
violating it too, the public admission marks you as someone who doesn't belong to the in-group of legitimate professionals. This is boundary policing in action. It's not about evaluating whether AI makes work better or worse.
It's about maintaining the symbolic boundaries that define professional identity. And those boundaries are defended

(08:41):
precisely because they're under threat. If AI really can do significant parts of knowledge work, then the boundary between professional and not professional becomes unclear. So people double down on enforcing the rules about what counts as legitimate professional behavior. The irony, of course, is that this enforcement mechanism, punishing

(09:02):
transparency, incentivizes the exact opposite of what we claim to value.
We say we value honesty and transparency, but our actual behavior reveals that we value maintaining comfortable identity boundaries much more. Now let's talk about what might be the most psychologically interesting finding from this whole line of

(09:24):
research: that the people who use AI secretly are often the harshest judges of people who disclose their AI use. This isn't unique to AI. This is a well established pattern in psychology called moral compensation or moral licensing. When people privately violate a moral standard or a social norm,

(09:45):
they often become stricter enforcers of that standard publicly. We sometimes see this play out in politics, for example. It's a way of resolving cognitive dissonance. Let me take you back to another previous episode, the one where I talked about the psychology of caring behavior. I talked about cognitive dissonance, which is the uncomfortable psychological

(10:09):
tension you feel when your beliefs and your behaviors don't align.
Festinger's research shows that people tend to resolve this dissonance not by changing their behavior to match their beliefs, but by adjusting their beliefs or by doubling down on public adherence to the norm that they're privately violating.

(10:29):
Secret AI users experience cognitive dissonance. They believe that real professionals don't use AI, or at least that good work is purely human, but they're using AI. This creates psychological tension. One way to resolve that tension is to become even more vocally critical of people who admit to using AI.

(10:50):
See, I'm still one of the good ones. I'm upholding professional standards. I'm judging these people who admit it. It's projection. It's self protection through enforcement.
And it's entirely predictable from a psychological standpoint. It also creates a vicious cycle. The more people secretly use AI, the more cognitive dissonance exists in the system. The more

(11:13):
dissonance exists, the harsher the public judgment becomes toward anyone who's honest. The harsher the judgment, the stronger the incentive to hide your AI use.
And round and round it goes. What we end up with is a social system where everyone knows that AI use is widespread, but nobody can talk about it honestly without facing social penalties.

(11:36):
It's a collective fiction that everyone maintains because the cost of breaking it (the identity threat, the legitimacy discount, the social judgment) simply feels too high. In my episode on moral psychology, I distinguished between intrinsic and extrinsic morality. Intrinsic morality is when your behavior is driven by

(11:59):
internal values and principles.
Extrinsic morality is when your behavior is motivated by external factors: rewards, punishments, or social approval. The AI transparency trap reveals a massive gap between stated values and revealed preferences. We claim to value honesty. We claim to value transparency. We claim that disclosure is the

(12:21):
ethical thing to do.
These are our stated values, the principles we say we hold. But our revealed preferences, the actual behaviors we reward and punish, are telling a different story. We punish transparency. We reward secrecy. Or at least we don't punish it.

(12:42):
Our actual behavior reveals that we value identity protection and social conformity more than we value honesty. And this isn't about individual hypocrisy. This is about a collective coordination failure. In theory, everyone would benefit if transparency about AI use became the default. We could have honest conversations about best practices.
We could develop better policies. We could learn from

(13:04):
each other's experiences and factor consent into AI practice. But in real practice, the first person to be transparent pays a massive social cost while everyone else maintains plausible deniability. It's a classic collective action problem. The individually rational choice, stay silent,

(13:24):
produces a collectively suboptimal outcome: a culture of secrecy and deception.
And because the penalty for transparency is rooted in identity threat rather than rational evaluation, you can't simply reason your way out of it. You can't convince people to stop penalizing disclosure by explaining that it's rational.

(13:45):
Identity protection is not rational. It's emotional, automatic, and deeply ingrained. This is why the researchers found that even when they tried to create collective validity, priming participants to believe that AI use is common and accepted, the trust penalty was reduced but not eliminated.

(14:07):
through framing alone. Let'szoom out and think about what
this tells us about howprofessional identities are
constructed and maintained.Professional identity isn't
primarily about what youactually do. It's about the
stories we tell about what makessomeone a real professional. For

(14:28):
knowledge workers, these storieshave traditionally centered on
human cognitive labor, thinking,analyzing, creating, and problem
solving.
The value of a professional wasdirectly tied to their human
intellectual capacity. AIdisrupts that story in a
fundamental way. If AI canperform tasks that were

(14:48):
previously markers ofprofessional expertise, then
what does professional identityrest on? What makes someone
valuable? Rather than grapplewith that question directly,
which would requirereconstructing professional
identity from the ground up,it's psychologically easier to
simply declare all AI useillegitimate.

(15:10):
It's easier to maintain the fiction that real work is purely human and enforce that boundary through social punishment. This is what sociologist Michèle Lamont calls symbolic boundary work, the way groups define themselves through moral and cultural distinctions. Professional groups maintain their status and identity by drawing boundaries around what

(15:33):
counts as legitimate professional behavior. Those boundaries shift over time, but they're always defended vigorously when they're under threat. What we're seeing with AI is a moment of acute boundary threat.
The traditional markers of professional identity are rapidly becoming less clear, so the boundary enforcement becomes

(15:54):
more aggressive. The penalties for violation become steeper, and the social policing becomes more intense. But here's the thing. This isn't necessarily conscious or deliberate. People aren't sitting around thinking, I need to protect my professional identity by punishing people who disclose AI use.
It's automatic. It's emotional. It feels like a genuine

(16:16):
evaluation of trustworthiness, even though it's actually an identity protective response. And this is what makes it so hard to change. You're not fighting against conscious bias or explicit prejudice.
You're fighting against automatic psychological processes that evolved to protect group identity and maintain social cohesion. Let's talk about incentives because

(16:42):
understanding the psychology of the incentive structure is vital to understanding why this pattern persists. From an individual perspective, hiding AI use is rational. The social cost of disclosure is real and significant: lower trust, perceived illegitimacy, potential professional consequences. We've been discussing that.

(17:03):
The benefit of disclosure is what, exactly? Feeling ethically pure? That's a pretty weak benefit compared to the concrete social costs. So individuals rationally choose secrecy. But when everyone makes that individually rational choice, we end up with a collectively irrational outcome, creating a

(17:25):
culture where deception is normalized, where no one can talk honestly about their actual practices, where learning and improvement are hampered because everyone's pretending they're doing things one way when they're actually doing them another way.
This is a coordination failure. It's a situation where individual incentives and collective welfare are

(17:45):
misaligned. Game theorists would recognize this as a version of the prisoner's dilemma or a coordination game with multiple equilibria.
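(A reader-facing aside: to make the game-theory framing concrete, here's a toy sketch of the dilemma as a two-player game. The payoff numbers are illustrative assumptions, not values from the study; they simply encode the dynamic described above, where hiding always looks better individually while mutual hiding leaves everyone worse off than mutual disclosure.)

```python
# Toy payoff matrix for the disclosure dilemma; the numbers are illustrative
# assumptions chosen to show the structure, not values from the study.
ACTIONS = ("disclose", "hide")

# PAYOFFS[(my_action, their_action)] = my payoff
PAYOFFS = {
    ("disclose", "disclose"): 3,  # transparency becomes the norm: best shared outcome
    ("disclose", "hide"): 0,      # the honest first mover absorbs the legitimacy discount
    ("hide", "disclose"): 4,      # staying silent while others disclose looks safest
    ("hide", "hide"): 1,          # culture of secrecy: stable but collectively worse
}

def best_response(their_action: str) -> str:
    """My payoff-maximizing action, holding the other person's choice fixed."""
    return max(ACTIONS, key=lambda mine: PAYOFFS[(mine, their_action)])

for theirs in ACTIONS:
    print(f"If the other person will {theirs}, my best response is to {best_response(theirs)}.")
# Hiding wins either way (a dominant strategy), yet mutual hiding pays 1 each
# while mutual disclosure would pay 3 each. That is the coordination failure.
```

(Whether you call that a prisoner's dilemma or a coordination game stuck in a bad equilibrium, the takeaway is the same: the individually safe move and the collectively good move pull apart.)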
Here's what makes it even trickier. The penalty for transparency is front loaded and certain, while the potential benefits of widespread transparency are diffuse and

(18:06):
uncertain.
If you disclose AI use, you face immediate trust penalties. Maybe, if enough other people also disclose, eventually the norms will shift and transparency will become accepted, but that is a maybe. It's in the future, and psychologically, you're bearing the cost right now. This creates a severe first mover

(18:26):
disadvantage. The people who are most ethical, who actually follow their stated values about transparency, get punished the most.
The people who are least ethical, hiding AI use while publicly judging others, face no consequences, and may even be rewarded with higher trust ratings. When the incentive structure punishes virtue and rewards vice, you can't fix the

(18:49):
problem by appealing to individual ethics. You need to change the incentive structure itself. But how do you do that when the penalties are rooted in automatic identity protective responses, rather than conscious choice? That's the trap.
That's why this is so psychologically sticky. So where

(19:10):
does this leave us? I want to be very clear. I am not advocating for dishonesty. I'm not saying it's okay to hide AI use because the incentives favor it.
What I am saying is that understanding the psychological mechanisms at play is crucial for figuring out how to create

(19:30):
better systems. But before I talk about individual strategies, I need to address something systemic that's making this whole problem so much worse. And that is the way generative AI and large language models are being forced on people. A significant part of this crisis stems from how AI is being deployed, driven

(19:52):
by tech company and venture capital greed, pushed by hype cycles rather than genuine need or consent. AI is being embedded everywhere, often without people having any real choice about whether they want to use it or not.
Companies are racing to slap "AI-powered" on everything they make,

(20:13):
not because it improves the experience, but because investors and markets are demanding it. This forced adoption exacerbates the identity threat we've been talking about. It's not just "AI might replace me." It's "I don't even get to choose whether or how I engage with this technology." That loss of agency intensifies the psychological

(20:35):
resistance and makes the legitimacy crisis so much worse. People aren't just resisting AI itself. They're resisting having their autonomy stripped away, being treated as passive recipients of whatever technical change companies decide to impose. That's a legitimate form of resistance, and it deserves

(20:56):
respect. A better approach would be to make AI opt-in by design. Let people choose whether and how they use AI tools.
Provide informed consent. Position AI as a specific tool to augment human effort in contexts where people find it genuinely useful, not as a mandatory overlay on everything

(21:21):
or as a replacement for human capability. When people have agency over their AI use, when they can decide, yes, I want to use this tool for this specific task because it helps me accomplish my goals, disclosure becomes less threatening. It's no longer admitting to something that was forced on you or that

(21:43):
undermines your value. It's making an intentional choice about tools, which is something professionals have always done.
This addresses both the autonomy problem and helps normalize transparent, intentional AI use. It turns AI from an identity threat into a legitimate professional tool chosen and

(22:03):
deployed thoughtfully.
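(Another reader-facing aside: here's a minimal sketch of what opt-in by design could look like in product code. Everything in it is hypothetical; the names are invented for illustration and don't come from any real AI product or API. The point is only that assistance stays off until someone explicitly says yes, and that consent is scoped to the task they actually chose.)

```python
from dataclasses import dataclass, field

@dataclass
class AIAssistSettings:
    """Hypothetical per-user settings for AI assistance, off by default."""
    enabled: bool = False
    consented_tasks: set[str] = field(default_factory=set)

    def opt_in(self, task: str) -> None:
        # The user deliberately turns on AI help for one kind of task.
        self.enabled = True
        self.consented_tasks.add(task)

    def may_use_ai(self, task: str) -> bool:
        # AI assistance runs only where this user has explicitly said yes.
        return self.enabled and task in self.consented_tasks

settings = AIAssistSettings()
print(settings.may_use_ai("drafting"))  # False: nothing happens without consent
settings.opt_in("drafting")
print(settings.may_use_ai("drafting"))  # True: a deliberate, disclosable choice
print(settings.may_use_ai("grading"))   # False: consent does not bleed across tasks
```

(Recorded this way, disclosure stops being a confession and becomes a record of a choice the person deliberately made.)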
Now, beyond that systemic issue, if we want to move toward a culture where transparency about AI use is possible without penalty, we need to address the identity threat that drives the legitimacy discount. That's a much harder problem than simply implementing disclosure

(22:24):
policies. Here are some things that might actually help at the cultural and institutional level.
First, we need to actively reconstruct professional identity in ways that allow professionals to incorporate rather than exclude AI use. And this means telling new stories about what makes someone a valuable professional. Maybe

(22:45):
it's not about raw cognitive horsepower anymore. Maybe it's about judgment, discernment, knowing when to use which tools, how to evaluate outputs, how to fact check, how to integrate AI assistance with human insight. That's still a skilled, valuable role.
It's just a different role than we're used to. Second, we need

(23:07):
institutional leaders to model transparency about their own AI use. When high status individuals disclose AI use without apology, it helps shift the norms, but this requires people with secure positions to take social risks on behalf of the broader community. That's hard. It requires courage and a

(23:27):
willingness to absorb short term legitimacy costs for long term cultural change.
Third, we need to make AI disclosure mandatory across the board. If everyone is required to disclose, then disclosure loses its signaling value. It's no longer marking you as different from the in-group if everyone has to do it. The

(23:49):
researchers found that mandatory disclosure reduced but didn't eliminate the trust penalty. But it's better than the current situation where voluntary disclosure is essentially self punishment.
Fourth, we need to have honest conversations about what AI actually can and can't do well. A lot of the anxiety around AI

(24:09):
comes from uncertainty and worst case thinking, as well as a little bit of worst practices from some of the people who have trained AI models. If we can create a more realistic understanding of AI's actual capabilities, what it genuinely helps with and where human judgment remains essential, it might reduce the identity threat. People are less

(24:32):
threatened when they understand that AI is a tool that augments rather than replaces human capabilities. But I want to be realistic.
These are hard, slow changes. In the meantime, individuals face real dilemmas. Should you disclose AI use and face the trust penalty? Should you stay silent and maintain legitimacy?

(24:52):
There's no easy answer, and it depends on your specific situation, your risk tolerance, and your institutional context.

(25:00):
What I can say is this: Understanding the psychological dynamics doesn't resolve the ethical dilemma, but it does help you make more informed choices. You can go into decisions with your eyes open about the trade offs rather than being blindsided by unexpected social penalties. And when possible, you can push back against forced AI adoption and

(25:22):
advocate for systems that give people real agency and choice. Let me close by zooming out one more time because this isn't just about AI. The dynamics we've explored today show up in countless other contexts.
Anytime a new tool, practice, or technology threatens established

(25:42):
professional identities, you see similar patterns. Photographers fought against digital photography. Graphic designers resisted computer aided design. Musicians pushed back against electronic instruments. The pattern repeats.
Identity threat leads to legitimacy policing, which leads to social penalties for early adopters who are honest about

(26:03):
using new tools. Eventually, the norms shift. Things like digital photography are now standard. Computer aided design is universal, and electronic music is an entire respected genre. But that shift takes time, and the people who are most honest during the transition period often pay social costs for their

(26:24):
honesty.
What this really reveals is how powerfully identity shapes our cognition. We like to think of ourselves as rational evaluators who assess tools and practices based on their outcomes. But when identity is at stake, rationality goes out the window. We assess things based on whether they confirm or threaten our sense of who we are and where we belong. We also see how

(26:48):
social norms can create perverse incentives that persist even when everyone knows they're counterproductive.
The AI transparency trap isn't maintained because anyone thinks it's a good system. It's maintained because changing it would require coordinated collective action, and the individual costs of being an early defector are too high. And

(27:10):
finally, we see the gap between stated values and revealed preferences. We claim to value transparency, but we punish it. We claim to value honesty, but we reward strategic silence. Understanding that gap, really understanding it, not just intellectually acknowledging it, is critical for making sense of human social behavior. These patterns aren't bugs in human

(27:33):
psychology. They're features. They evolved to help us maintain group cohesion, protect valuable identities, and coordinate social behavior. But in the face of rapidly changing technology, these same features can create traps that are hard to escape. Recognizing the trap is the first step. Figuring out how to escape it individually and collectively is the harder work

(27:56):
that lies ahead. If this episode made you think a little differently about AI transparency, professional identity, or the gap between what we claim to value and what we actually reward, I'd love to hear about it. As always, you can find show notes, references, and the transcript at PsyberSpace. If you enjoyed this episode, please share it with

(28:20):
someone who might benefit from understanding these psychological dynamics.
And if you want to dive deeper into related topics, check out our previous episodes on identity protective cognition, moral psychology, and cognitive dissonance. They all connect to the themes we explored today. Thanks for joining me on PsyberSpace. I'm your host, Leslie Poston, signing off and

(28:42):
reminding you, stay curious. And don't forget to subscribe so you never miss an episode.