
June 10, 2025 · 24 mins

Tim takes us behind the social media curtain to reveal uncomfortable truths about what's really happening in our feeds. Having operated in the gray areas of Instagram growth, he shares startling insights: up to 40% of social media traffic might be fake, strategically engineered through automated engagement tactics designed to trigger reciprocation. This isn't speculation – it's based on firsthand experience running services that manipulated algorithms for follower growth.

The conversation ventures into the increasingly credible "Dead Internet Theory" – the notion that most online content isn't created by humans anymore but by AI and bots. What makes this particularly fascinating is how quickly we've moved from theory to reality. When first proposed, AI-generated content wasn't widely available. Today, distinguishing between human and AI-created text or images has become nearly impossible for the average user. Tim shares a personal encounter with a Reddit karma-farming bot that offered plausible but nonsensical responses to his research questions, highlighting how pervasive this issue has become.

For business owners, these revelations create strategic dilemmas. The more businesses rely on AI for content creation, the more they contribute to the very problem they might criticize. Meanwhile, shadow banning – when platforms limit content reach without notification – presents another layer of risk. Tim traces this practice back to early internet forums, explaining how platforms use it to avoid the support overload that comes with explicit bans. His practical advice? Build an email list as insurance against unpredictable platform changes. Unlike social followers, your email subscribers remain accessible regardless of algorithm shifts or account restrictions.

Ready to see social media through new eyes? Grab Tim's book "Framed: A Villain's Perspective on Social Media" for an even deeper dive into these topics from someone who's operated on both sides of the digital divide.

Connect with Tim:

Tim O'Hearn

LinkedIn

Send me a text if you loved this episode!

Rate, Review, & Follow on Apple Podcasts

Your feedback helps me reach more solopreneurs like you.

It’s super easy—just click here, scroll to the bottom, tap those five stars, and hit “Write a Review.” I’d love to know what resonated most with you in this episode!

And don’t forget to hit that follow button if you haven’t already! There’s plenty more coming your way—practical tips, inspiring stories, and tools to help you grow a business that makes a real difference. You won’t want to miss out!

Let's Connect on Instagram
yeslab.ca
Search your favorite episodes HERE

This podcast is produced, mixed, and edited by Cardinal Studio. For more information about how to start your podcast, please visit www.cardinalstudio.co
Or e-mail mike@cardinalstudio.co


Episode Transcript

Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Tim (00:00):
Because, when I was a kid, safety on the internet meant something totally different than what it means today.

Alyssa (00:07):
Welcome to Brilliant Ideas, the podcast that takes you behind the scenes of some of the most inspiring digital products created by solopreneurs just like you.
I'm your host, Alyssa, a digital product strategist who helps subject matter experts grow their business with online courses, memberships, coaching programs and eBooks.
If you're a solopreneur with dreams of packaging your expertise into a profitable digital product, then this is

(00:29):
the podcast for you.
Expect honest conversations of how they started, the obstacles they overcame, lessons learned the hard way, and guests who faced the same fears, doubts and challenges you're experiencing, from unexpected surprises to breakthrough moments and everything in between.
Tune in, get inspired and let's spark your next big, brilliant idea.
Welcome back to the Brilliant Ideas Podcast and if you're new

(00:50):
here, I'm Alyssa, and this is the podcast where we dive into the real stories behind big ideas, smart strategies and the things no one tells you about when you're building a digital product business.
Today, we're talking about something we all use every day: social media.
We're diving into the hidden side, the part that most people

(01:10):
don't see or don't want you to see.
My guest today is Tim.
He's the author of the book Framed: A Villain's Perspective on Social Media.
If you've ever questioned what's really going on behind your feed, or if you just want to know why people are still buying fake followers, this episode is for you.
Let's get into it.
Welcome to the show, Tim.
Thanks so much for coming on today.

Tim (01:28):
Thanks, Alyssa.

Alyssa (01:29):
Yeah, and I'm really excited about this topic because it's one of those things where we think of social media as people posting, engaging and sharing content.
But the more that we dig, the more you realize that there's this whole unseen layer shaping what we see, what goes viral and even how we interact, and so it's not really about influencers, ads or algorithms.

(01:49):
I think there's a whole other conversation about that, but I think it's also about how much of what we see online is actually real versus how much is carefully engineered behind the scenes, and so my question for you is: how much of what we see on social media is fake versus real?
You can kind of break that down.

Tim (02:08):
Sure, I think it's really cool to start off with a
question like that, because it requires challenging a lot of what we see and believe every day.
Regardless of somebody's experience with social media, they're probably on it and they probably have usage patterns and opinions that are quite well developed.
You know, for my friends or for me, we've been on social media for 15 years or more, so we're

(02:30):
really ingrained.
We take a lot of what we see at face value perhaps, but the question remains: what is real and what is kind of real and what is maybe, you know, algorithmically augmented?
So when I put everything together to write my book, I was

(02:51):
thinking about what the behavior was that I was partaking in, which was algorithmic fake engagement on Instagram.
So what we were doing was getting people more followers by logging into their accounts and then shooting across DMs, comments, likes, with the idea that they would be reciprocated

(03:12):
and the rate of return or the rate of reciprocation was right about 10% or 15%.
So the idea was, if you send 100 follows per day, you might expect to get 10% of those back, or 10 follows.
If you scale that up in an automated fashion across days, weeks and months, that could actually be pretty decent follower growth for a new account for someone who's trying
(03:33):
follower growth for a newaccount for someone who's trying
to get to their first 1K or first 2K followers, especially on Instagram.
So really we had to extrapolate that and think, if we're doing this and we're offering this service and we're one of the smaller players, how much overall might be fake on these platforms?
And the numbers that we got to by the late, you know, we'll say

(03:55):
the late 2010s, about 2018 or 2019, were that as high as 40% of all traffic might've actually been fake.
It's just very hard for us to see that without being insiders at a company like Meta.
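A minimal back-of-the-envelope sketch of the reciprocation math Tim describes above, assuming a constant daily outreach volume and a constant reciprocation rate. The 100-follows-per-day figure and the 10-15% range come from the episode; the time horizons and the function name are illustrative assumptions, not details of Tim's actual tooling.

```python
# Rough model of follow-for-follow growth: send a fixed number of outbound
# follows per day and assume a constant share of them are reciprocated.
# 100 follows/day and 10-15% reciprocation are the episode's numbers;
# the 7/30/90-day horizons are just illustrative.

def projected_followers(days: int, follows_per_day: int = 100,
                        reciprocation_rate: float = 0.10) -> int:
    """Expected followers gained from automated outreach alone."""
    return round(days * follows_per_day * reciprocation_rate)

for days in (7, 30, 90):
    low = projected_followers(days, reciprocation_rate=0.10)
    high = projected_followers(days, reciprocation_rate=0.15)
    print(f"{days:>3} days: ~{low}-{high} new followers")
# At 100 follows/day, 90 days lands around 900-1,350 followers, which is
# roughly the "first 1K" Tim says a new account could reach this way.
```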

Alyssa (04:09):
Wow, that's kind of unsettling to think about, right?
Like, how much of what we see online, we don't really know those numbers, because they're not shown to us, whether it's bot engagement or fake followers.
But that actually ties into something bigger which you know a lot about, which is this dead internet theory.
Now, I know some of it.

(04:30):
I know a little bit about this.
For my listeners who haven't heard of it, the dead internet theory is this idea that most of the content that we see online isn't actually created by real people anymore, but generated by AI and bots.
But what's interesting is, when you look at how much AI-generated content is already out there, it makes you wonder, like, how much is this theory actually true?

(04:53):
Like, I'm curious, like, what is your take on this?
Like, is the dead internet theory real or is it just some other internet myth?
Because I don't know.
I kind of think it's true.

Tim (05:05):
Yeah, more and more people are coming around to say maybe
it's less of a theory and it's more being proven.
It's just to what extent is it true, rather than is this a myth?
What I find funny is, when it was first proposed, AI-generated content was not commercially available, was not publicly available.
So when I first started writing essays, thinking about, you know, what is the future of the internet, what is this?

(05:27):
I called it late-stage Instagram, like what's going to happen once we become so profit driven that we might have bots running the show.
This was before we could conceptualize how much content could be created by a computer, so that was maybe four years ago.
Today it's totally different, where most pictures in your feed could be fake and, for the majority of text you see on Reddit,

(05:51):
you probably wouldn't be able to tell real versus fake.
So that's been a really concerning topic that I've had to address head on, even while I was doing research for my book.
So I published my book Framed: A Villain's Perspective on Social Media in late February and, while I was doing some of the very, very last research for the book, I reached out to people on

(06:11):
Reddit and I said, hey, there's this part of the early internet, it's not really well captured anywhere.
Do you remember what happened on MySpace in 2007?
Do you remember some of these experiences?
One of the responses I was met with was actually provided by a bot and specifically it was a karma-farming bot that was going

(06:32):
to newly posted threads that might have had the potential to go viral and it was posting plausible but ultimately nonsensical responses.
And it was only obvious there because we were talking about MySpace in 2006 and 2007, which a lot of the AI had been poorly trained on.

(06:53):
So when we think about the dead internet theory and that, like, now I had a personal experience with it.
You've probably had personal experiences with saying, like, is everybody just a shill now?
Like, is any of this real?
Of course, we are not going to get to a point where everyone on the internet is fake, like, you're real and I'm real.
I think we can be pretty confident in that.

(07:14):
And of course, you have what I call the intimate network, which is people who you interact with a lot because you also know them in real life or you've gotten some other proof that the identity matches to a human being.
But as we go further and further and we get more people on platforms such as Reddit, I think the chances of this being true are much higher and I would say the capacity for feeds to

(07:38):
be manipulated by fake content is, you know, a lot larger.

Alyssa (07:43):
Yeah, and so what do we do with that information?
Like, how do we, you know, for businesses who are on there selling their services, their products and things like that?
Like, where do we go from here?
Because if what we're selling and posting on Instagram isn't getting as much engagement, it could become a real problem, right?

Tim (08:08):
Yeah, for sure.
And it's a balancing act, especially for businesses, because now we see a lot of businesses are using AI to write their headlines and they're using AI to write their call to action, and they're, oh, you know what, I'll use AI to publish this picture because I don't really have original content, and you have this weird state where,

(08:28):
even though there is a human making the decisions, it could just as easily be a bot making the decision to use AI for advertising.
So, as a human who's using AI to generate headlines, you're one step closer to the dead internet theory, to making it true, or at least to creating this state where a new person on the internet will see your content.

(08:50):
I mean, imagine somebody logs into Facebook now or logs into Reddit now for the very first time.
It's plausible that a lot of what they see is at least AI augmented, meaning somebody used AI to help them write, to help them with conversions or to help them just with the content in general.
So I think what a business owner can do is ideally not use

(09:10):
AI at all, which is getting harder and harder.
But as far as enforcement, it takes, I think, large-scale agreement that we don't want bots doing certain things, and it has to be this agreement between the platforms and, of course, they have their own decisions to make regarding that.
And then also platform users.

(09:31):
So if you're a user on Facebook and you've worked really hard to get, say, like 5,000 fans, you're going to be annoyed if someone is quickly gathering fans because they're using a bot.
So you would naturally say, hey, I support enforcement against bots and people using AI-generated content.
It gets much more difficult when the platform has to decide, well, if I ban everyone who's ever used AI and I restrict them

(09:54):
from using this, where are my users?
Where are my customers?
The platform has this tough balancing act where, if they enforce things too, you know, too much, they might wrongly flag your content as being AI generated, which is really, really tough.
So I think the ultimate enforcement action is probably to say, if we are in agreement that bots are bad, and

(10:16):
especially bots using AI to seem human are bad, the only solution is taking verification to the extreme.
So, things like what we saw with Twitter in 2009.
When they introduced the blue checkmark, it was actually an experiment.
It wasn't like, oh hey, this is going to be the main mark of authority on the internet for the rest of history.

(10:37):
That's what it became, but that wasn't their intention.
In fact, it was only to prevent impersonation for individuals.
Businesses couldn't get the blue checkmark at all in 2009.
It was not until later.
So we have to take that to the extreme where we think, now, blue checkmarks are much more commoditized and anyone on

(10:57):
platforms, I've even submitted my ID, but when I log in, do we know that it's actually me?
Verification, meaning every time you log in or at some regular interval, you would be prompted for either a

(11:18):
fingerprint or a face scan or something to ensure that it's
actually you using your account.
This is extremely expensive to implement.
Nobody wants to be the first one, but the example I gave recently on a podcast is that there are Uber drivers using other people's identities to give rides.
So if there's one place where we should probably start

(11:38):
trialing this, it's probably rideshare, and then, beyond rideshare, it's probably delivery apps, because in New York there's a huge black market for Grubhub accounts.
And then, of course, that trickles down to, well, is there a black market for Reddit accounts, Twitter accounts, LinkedIn accounts?
The answer is yes, but we should probably start where there are actually human safety concerns.

Alyssa (11:59):
Wow, yeah, and also, I think it's up to the platforms to start talking about this and, like, enforce that, you know, that businesses should not be using AI to create their content for them.
But it just seems like that's where everything's going.
Like, if you're not using AI in some capacity in your business,

(12:20):
then you're kind of, like, outcast on this deserted island.
Like when I had said that, you know, when you create a course, for example, you don't want your AI to be creating the course for you.
It could help you, assist you, but ultimately you're the expert here.
Like, you have to be the one to create the course outline and to think about all those things.

(12:41):
Like, a bot is helpful, but, you know, I just find we're losing that kind of personal touch, you know, and so it just seems like we're going in that direction and it's really hard to see.
Like, now, can I create?
And a lot of people, I find a lot of business owners, are now

(13:03):
relying on it so much.
Like, bots have become their business, like their assistant, their content creator, or like where they get their ideas.
Like, I had a client who said to me recently,
She's like, I feel stupid now, like I feel like I can't think anymore for myself because I just asked ChatGPT to help me,

(13:24):
like, with my ideas.
And I just feel like I rely on AI so much for my business to create my ideas and my content that, like, I can't think for myself anymore.
And it's crazy that we have so quickly relied on bots to do everything for us.
And you know, I'm kind of going down a rabbit hole here, but I'm

(13:47):
curious about the content that, yes, okay, the content we see on Instagram, great, but what about the content that we don't see?
So, like, I'm referring to things like shadow banning here, like there's a lot of mixed information online saying that it doesn't exist.
And then others who experience, like, a shadow ban themselves, where they're going on and they're saying, like, I have been

(14:08):
shadow banned, and they'll give, like, some context around that, but it makes you question, just to think of it, is it real or is it not?
And I want to ask you, what exactly is a shadow ban?
What is it?
And then, is it a real thing or is it just something that people claim when their engagement drops, just to draw more attention to their accounts?

Tim (14:28):
Yeah, there's definitely a spectrum there and I think it's
a really important point now because the concept of a shadow ban is being used in perhaps dangerous ways.
So I would expand it and address the spectrum by saying we're also talking about algorithmic interference, which is saying we have a certain set of beliefs about how a feed

(14:49):
algorithm will post and share and expand the reach of our content, and a lot of times there can be manual overrides that are completely contrary to what a user expects, that seriously inhibit reach, and a lot of that is what people now call a shadow ban.
In my book I actually found that the best historical

(15:10):
research for the concept of shadow banning is Urban Dictionary, and it sounds ridiculous, but with this website that's been around since the dawn of time, basically nothing ever gets deleted.
So you can see how shadow banning was first suggested in like 2007 or 2008, about, like, online forums, so things that,

(15:31):
like, gamers were using, and the examples that were given were very early-internet, gamer-driven.
So they're saying, oh, this guy was flaming.
We don't even know what the term flaming means anymore.
It means that you were starting a flame war, which was a type of trolling, which we now say is toxic, or there's other terms we use today, but this is an ancient term as far as internet history is concerned.
Then you see more definitions popping up where they're talking about Twitter, and then the political aspect, which has been really relevant in the United States.
They start saying, hey, it might be one side of the spectrum versus the other and the platform is aligned in one way.
It's silencing voices on the other side, and we see that morph out where more recent examples do concern platforms like Meta.

(16:19):
So what is a shadow ban?
It is essentially a ban or an effective ban, in that your reach is zero, or like zero, while the user has no obvious indication that the reach has been restricted.
Why would they even bother doing this?
It's a control system that a user has to uncover on their own

(16:43):
because they're not getting as much reach or as much of what they would expect.
The difference is that if you log into your phone on whatever app and you see a ban message, you're going to freak out.
It's going to be like a very, very chaotic, stressful situation.
I've suffered so many bans I've lost count because I was

(17:03):
breaking the rules for years industriously.
But for other people, if it's their business, they only have one account or two accounts.
Losing that account access is really, really awful.
So if you look on Reddit and you look at r/Instagram, the entire feed is people complaining about their accounts

(17:23):
having been banned.
Because it's emotional.
If your life is tied to your account, it's emotional.
So I propose that shadow banning is the response to the insane support load that's generated from actual bans, and it's saying, hey, we don't actually have to ban this person.
Let's just make them think they're still a part of the conversation.
But they're not.
And when they realize this, in a couple of weeks or a month,

(17:47):
maybe we'll lift the ban.
It could be used as a temporary ban, but also it won't immediately cue the user into knowing that they've been restricted, and I think that's the real usefulness of it for these platforms and why it's proliferated.
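To make that mechanic concrete, here is a toy sketch of the behavior Tim describes: the flagged author still sees their own posts and gets no ban message, while those posts are silently filtered out of everyone else's feed. This is purely illustrative Python under assumed names; it is not how Meta or any other platform actually implements moderation.

```python
# Toy model of a shadow ban: no ban notice, but the flagged author's posts
# are dropped from other users' feeds while remaining visible to the author.
from dataclasses import dataclass

@dataclass
class Post:
    author: str
    text: str

SHADOW_BANNED = {"flagged_account"}  # hypothetical set of restricted authors

def build_feed(viewer: str, posts: list[Post]) -> list[Post]:
    """Posts a given viewer would see: flagged authors still see their own content."""
    return [p for p in posts if p.author == viewer or p.author not in SHADOW_BANNED]

posts = [Post("flagged_account", "Check out my new offer!"),
         Post("regular_user", "Just a normal update.")]

print([p.text for p in build_feed("flagged_account", posts)])  # author sees both posts
print([p.text for p in build_feed("someone_else", posts)])     # others see only the normal update
```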

Alyssa (18:01):
That's really stressful for a small business owner, if that happens.

Tim (18:03):
Yeah, right, it's everything.

Alyssa (18:06):
Yeah, I mean, I feel like if that happened to me, I
would be like, I mean, I don't have, you know, thousands of followers, but I'm still emotionally tied to my Instagram account.
That would be awful if that happened to me.
And the thing is, too, is that there's no explanation.
I find that, if it does happen, there should be a long

(18:27):
explanation as to what you broke and why, and then how do you get it back, and that all comes, you know, from customer support and all those things.
But I think, beyond that, it's something to think about: the types of content that we're creating for our businesses and what we can and cannot say and what other things that we should.

(18:51):
You know, what is appropriate, what is not appropriate, you know.
And so I feel like this ties really nicely into something that I always like to end with on my podcast.
This is called the Brilliant Bites of the Week, and this was created because I wanted my listeners to feel like, okay, you know, if I were to be listening to this episode, what could I

(19:14):
end with, like?
What could I do right now for my account, or what could I?
A strategy that they could implement right away, that, you know, is helpful and could give them maybe a different perspective on social media.
So what advice or insight or something that you

(19:37):
can leave or a takeaway that they can use right away?
That would be helpful.

Tim (19:43):
Sure, and I think it's really close to what we're
addressing with shadow bans.
There's so much to say on this that it's actually two chapters in my book, and it's named Safeguarding and Shadow Bans Part 1 and Part 2.
And it talks about the moving of the goalposts as far as what is appropriate and what internet safety actually means, because

(20:06):
when I was a kid, safety on the internet meant something totally different than what it means today, and what's appropriate to say on a platform.
You know, the conversations people are having, the words that you can and can't use are totally different.
Anybody looking for that advice or that, you know, kind of the

(20:27):
way to avoid a lot of this is to think about how to construct an audience in a way that's kind of, I would say, organic or even, like, agnostic of the platforms.
So if your account gets banned, if there's some enforcement action, you don't own your followers forever.
You can't just export them and take them somewhere else.

(20:49):
If you haven't already exposed them to your other channels, you're out of luck.
Maybe some of them will find you, maybe they follow you on YouTube or your website.
But my advice, to understand this or to maybe avoid participating in the crazy moving of the goalposts, is to start a mailing list, and I know it's such simple advice, but it's like, if you lose your Instagram, your TikTok, I still

(21:11):
get people messaging me about losing TikToks.
I never even operated on TikTok, but it's such a huge business segment for certain people that I say you could have mitigated this if you were funneling people to a mailing list, because in that case, you do own it, and it's however you interface with email.
No single platform can control it.
So I think that's a really interesting thing for those who

(21:33):
maybe don't care as much about going deep into the nature of why social media is what it is, and it'll actually help them funnel the right customers to the right channels, depending on what they're posting and what they're selling.

Alyssa (21:47):
Yeah, I agree with that. Email lists all the way, because you just don't know, especially when it goes down and everyone freaks out, you're wondering, like, yeah, I should have built an email list.
So that's great advice, and I think we've only scratched the surface of how much social media is really shaping what we see, believe and engage with for hours.

(22:08):
So I just want to thank you for pulling back the curtain on this and giving us a deeper look at the side of the internet that, you know, most people do not talk about, and, you know, I've had other guests on here who've talked about social media, but more in the way of, like, influencing, growing your social media content and, like, shares and all those things.
So you give a really different perspective, which is really good.
Now, before we wrap up, I know that you dive even deeper into

(22:29):
all of this in your book Framed: A Villain's Perspective on Social Media.
Can you share a little bit about what readers can expect from that?

Tim (22:37):
Sure, Framed is a really exciting book because I only created it because the existing books were not interesting to me.
I would read them and I would say, yeah, there are some good points here, but this is written by somebody who is too far away from what's actually going on.
So it was never an insider.
It was usually some wealthy, older person who had gone

(22:57):
through tech or gone through the journalism space, and they cited all their sources, but nobody was coming out and saying, here's what I did.
As a villain in the space, you get just as close to an insider as you can because you're knocking against the wall that they created.
You know, you're the one breaking the rules, and, you know, it's really this push and pull versus the

(23:18):
business itself.
So what Framed is, is equal parts a memoir and a technological essay collection, and it's really a book of interest for anyone who is on social media, whether as a user or a business owner who's looking to get just a little bit deeper.
So it definitely goes deeper than, you know, than this conversation.
I think we just scratched the surface, but Framed now, it's

(23:40):
out in paperback.
It's about 440 pages of purely original content and I think it'll have a big impact on how we look at social media going forward.

Alyssa (23:49):
And it's also in the show notes, where you can grab a copy as well.
So I just want to thank you, Tim, for being a guest today.

Tim (23:55):
Yeah, thanks, Alyssa.

Alyssa (23:56):
No worries.
So for everyone listening, if you've got thoughts on this one, send me a DM on Instagram.
I'd love to hear your opinion on this. Thanks for hanging out with us, and I'll catch you next time on another brilliant idea.
Thanks for tuning into this episode of Brilliant Ideas.
If you love the show, be sure to leave a review and follow me on Instagram for even more insider tips and inspiration.
Ready to bring your next big, brilliant idea to life?

(24:17):
Visit AlyssaVelser.com for resources, guidance and everything you need to start creating something amazing.