
November 24, 2025 46 mins

Matthew Bertram and Jon Gillham unpack how AI content, plagiarism risk, and Google’s crackdowns reshape SEO, then lay out guardrails that protect rankings while building real LLM visibility. The focus stays on practical governance, provenance checks, entity health, and adding value beyond words.

• Rebrand context and why AI integrity matters
• Study showing AI overviews citing AI content
• Risks of synthetic data and value of human signals
• School and workplace guardrails for AI use
• Google’s stance on helpful content vs scaled abuse
• Penalty patterns, core updates, and indexing lags
• Plagiarism trends, fair use thresholds, and QA checks
• LLM visibility strategy and entity consolidation
• Editorial workflows to detect copy‑paste AI
• Actionable playbook for responsible AI adoption

Guest Contact Information: 

Website: originality.ai

LinkedIn: linkedin.com/in/jon-gillham

More from EWR and Matthew:

Leave us a review wherever you listen: Spotify, Apple Podcasts, or Amazon Music

Free SEO Consultation: www.ewrdigital.com/discovery-call

With over 5 million downloads, The Best SEO Podcast has been the go-to show for digital marketers, business owners, and entrepreneurs wanting real-world strategies to grow online. 

Now, host Matthew Bertram — creator of LLM Visibility™ and the LLM Visibility Stack™, and Lead Strategist at EWR Digital — takes the conversation beyond traditional SEO into the AI era of discoverability. 

Each week, Matthew dives into the tactics, frameworks, and insights that matter most in a world where search engines, large language models, and answer engines are reshaping how people find, trust, and choose businesses. From SEO and AI-driven marketing to executive-level growth strategy, you’ll hear expert interviews, deep-dive discussions, and actionable strategies to help you stay ahead of the curve. 

Find more episodes here: 

youtube.com/@BestSEOPodcast

bestseopodcast.com

bestseopodcast.buzzsprout.com

Follow us on:

Facebook: @bestseopodcast
Instagram: @thebestseopodcast
TikTok: @bestseopodcast
LinkedIn: @bestseopodcast

Connect With Matthew Bertram: 

Website: www.matthewbertram.com

Instagram: @matt_bertram_live

LinkedIn: @mattbertramlive

Powered by: ewrdigital.com

Support the show


Episode Transcript

Available transcripts are automatically generated. Complete accuracy is not guaranteed.
SPEAKER_00 (00:00):
This is the Unknown Secrets of Internet Marketing, your insider guide to the strategies top marketers use to crush the competition.
Ready to unlock your business's full potential?
Let's get started.

SPEAKER_01 (00:13):
Howdy, welcome back to another fun-filled episode of the Unknown Secrets of Internet Marketing, which I am trying to drop that piece of.
Okay, so Best SEO Podcast, The Best SEO Podcast.
We have all those handles.
And Internet Marketing Secrets.
We've decided to drop that, so I need to change the bumpers, but you can find us anywhere at @bestseopodcast.

(00:34):
This is the Unknown Secrets of Internet Marketing.
We've been running for 12 years straight, and we talk about everything digital marketing and SEO and, well, AI, because AI has taken over.
And I thought it would be good, as we're continuing to have these AI discussions, to hold up, pump the brakes, and say: okay, plagiarism, AI content generation, how are we ranking,

(00:58):
what's going on?
I even remember a publicly traded client we had early on: it came back, like, 92% human generated... rejected.
I was like, well, he wrote it, you know, he wrote it.
And the LLMs are trained on human writing.
So a lot of it's going to be human-written.

(01:19):
Like, I knew this person who wrote the article; like, he knew nothing about AI.
I was like, there's no way that he wrote this.
And so that started the education process, the AI governance process, to understand how we need to frame these things, how we need to look at these things.
And I wanted to bring on Jon Gillham from originality.ai.

(01:39):
He's got an AI checker, a plagiarism checker, a fact checker, because, well, there's an issue with, as we were talking about in the pre-interview, the integrity of the internet as a whole, as LLMs are referencing LLM sources.
You said you just completed a study on how many AI Overviews are referencing AI-generated content.

SPEAKER_03 (02:01):
Yeah, thanks, Matt.
Thanks for having me.
Yeah, we did just do a study.
We find it infinitely fascinating to understand how AI-generated content is proliferating across the internet.
And so we run studies on where that's happening, where it isn't happening.
With this study, we looked at hundreds of thousands of

(02:21):
AI Overviews, and then the websites that they had cited, and then ran those websites through our AI detector. Our AI detector is highly accurate, but not perfect; but across a large data set it is very telling and can be relied on for understanding.
And we saw 15, at times 20, percent of Your

(02:42):
Money or Your Life searches citing an AI-generated piece of content.
And so it, yeah, definitely raises the question of the snake eating its own tail, as some visualize it.
But if AI is rooting itself in AI, there's a whole world of problems that can come from that.
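
The pipeline behind the study Jon describes (collect the pages AI Overviews cite, score each through an AI detector, aggregate the share flagged) can be sketched roughly like this. The detector below is a toy stand-in, not Originality.ai's actual API, and the page texts and threshold are illustrative assumptions:

```python
# Rough sketch of the study pipeline: score each cited page with an AI
# detector and report what share of citations look AI-generated.
# detect_ai_score() is a toy stub for illustration; a real study would call
# an actual detector service on the fetched page text.

def detect_ai_score(text: str) -> float:
    """Stand-in detector: probability in [0, 1] that the text is AI-generated."""
    return 0.9 if "as an ai language model" in text.lower() else 0.1

def share_of_ai_citations(cited_pages: dict, threshold: float = 0.5) -> float:
    """Fraction of cited pages whose detector score meets the threshold."""
    if not cited_pages:
        return 0.0
    flagged = sum(1 for text in cited_pages.values()
                  if detect_ai_score(text) >= threshold)
    return flagged / len(cited_pages)

# Hypothetical citations pulled from a batch of AI Overview results.
pages = {
    "https://example.com/a": "Hand-written review with personal anecdotes.",
    "https://example.com/b": "As an AI language model, I can summarize this...",
    "https://example.com/c": "Original reporting with quotes from named sources.",
    "https://example.com/d": "As an AI language model, here are ten tips...",
}
print(share_of_ai_citations(pages))  # 0.5: two of four citations flagged
```

Run at the scale Jon mentions (hundreds of thousands of Overviews), even a noisy per-page detector yields a reliable aggregate share, which is why the study can quote a 15 to 20 percent figure.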

SPEAKER_01 (03:04):
Yeah, the degradation of data as AI feeds itself.
I would love to go down the rabbit hole later on synthetic data and how that works.
That's something that we're starting to get into, and we're testing out some different tools on it.
But, like, I remember Elon Musk, right, bought Twitter, and then, you know, for freedom or whatever. Like, I think it

(03:26):
was a great move.
But I think, I mean, it's clear, and I believe this to be true: he bought it to train Grok, right?
And he also was tweeting about how many bots there were on Twitter.
And he had to get all of that synthetic data, or that AI-generated data, out of there, because it needs to be trained on real humans.

(03:47):
That's also why Google did the deal with Reddit: because, you know, real humans are providing the inputs.
And I think that's probably why ChatGPT, or OpenAI, made it free for so many people: because they need the engagement with real humans to train these models.

SPEAKER_03 (04:07):
Certainly, any time that LLMs have attempted a very serious effort at training on synthetic data, it has not gone well.
And so the majority of LLMs are being trained on all data that is human-created data.

(04:28):
And it's one of those AI paradigm shifts where, you know, for the rest of humanity, all known human text data sets have been created.
AI has infiltrated itself into so many things, even when you don't think it has. If

(04:49):
you're accepting Grammarly edits, then there's a little bit of AI getting added into that human text.
And so, for the rest of humanity, any human text, any human data set, will have some amount of AI in it, compared to pre-2020.

SPEAKER_01 (05:06):
I've never really looked at it that way.
That is really philosophical, actually, right?
Like, I mean, we saw it with Grammarly and some of these other tools.
Um, you know... Surfer, what is it? Surfer, yeah.
Yeah, and, like, you can't get away from data that doesn't have

(05:29):
autocorrect or, you know, something that helps you write it.
So there's some level of infiltration in all information going forward, right?
That is, like, wild to think about.
In certain areas, it's certainly accelerated more.
I mean, to set the table even more: school systems are dealing with this

(05:52):
nonstop.
Like, hey, did you write this paper or not?
And that goes even back to the question of when I was in school. It's like: I'm always going to have a calculator.
If I'm always going to have a calculator, why do I need to learn long division?
Right.
That was my argument for a long time.
I think school's changing. Like, you're always going to have an

(06:13):
LLM as a copilot, or a sidekick, to what it is you're doing.
And you now have the most intelligent PhD level in every category at your fingertips, with you at all times.
I feel like schools should not fight that, but embrace it and teach people how to think in systems.

(06:34):
And, you know, but I don't know the rules around the plagiarism, the fact-checking.
I know I mentioned to you, I've seen at a couple of conferences: someone created a city and got that cited, but it was a fictitious city.
And then I've seen it with false publications on, like,

(06:55):
sporting scores or somethinglike that.
It gets sucked up and ingestedso quickly, and then it gets
propagated, and then there'sreference points, and now you're
proliferating fake data.
I think in the election cycles,you're gonna have some, some
it's gonna get pretty bad.
It's already started to getthere.
Um, I mean, just to set thetable more and when we'll zoom

(07:16):
in on, you know, how to better rank a site using maybe AI-generated content.
What are the rules around this?
Like, what are the guardrails?
What are the rules?

SPEAKER_03 (07:24):
Yeah, so, in academia... I can talk to academia and marketing.
So, in the world of academia, I mean, as you can expect: slower to adapt, resistant. Some people are being super progressive about saying, you know, LLMs allowed, you're going to use them, exactly as you said, the calculator example.
Others are saying this isn't how brains are formed, where brains

(07:46):
are still soft, and they need to go through this lifting of the weights of writing and thinking through that process, to be capable of using these tools to their full potential.
You don't throw somebody who has never been a race car driver into a 300-horsepower Charger and say, good

(08:06):
luck. That's the other analogy.
And so, I think, you know, I've got young kids, 12, 12, 10. They're the lab rats for this, as, you know, I was for the internet.
So what does that mean for

(08:28):
education?
That's a tricky question.
I think if you're studying at a higher-ed institution, and an LLM can get a better mark than you can, you should maybe rethink whether or not the thing that you are studying is producing a lot of value in the world.
And so, I'd say, that's the big question, right?

SPEAKER_01 (08:49):
That's a big question for everybody. And, you know, you made me think of that MIT study that came out, that showed that people are relying on LLMs for thinking, not leverage but thinking, and their brains are shrinking, or whatever, or some capacity of that.
So I think with any really sharp blade, or whatever analogy

(09:11):
you're using, there are two sides of it, and you have to be cognizant.
So I think that's a great weighted point there, Jon.

SPEAKER_03 (09:19):
So, yeah, and then on the marketing side.
If we're talking about, like, hey, you're in an organization, you function as a marketer, your job is to produce content that gets you traffic, customers, and users, whatever your objectives are.
The most important piece that I think is often being missed right now is alignment around the

(09:43):
proper usage of AI: where it is allowed to be used, where it is not allowed to be used, and the controls put in place to be able to manage that.
What we're seeing, and it's an extreme example, but we're seeing interns come in with no guidance around AI usage, spinning up an API, spamming the site with AI

(10:05):
generated content, and sites getting absolutely crushed by Google.
That's, like, the extreme example.
But the risk owner, the business owner who is accepting this risk, is doing so with no knowledge and awareness of the risk that they're accepting.
And so that step around alignment on where it can

(10:29):
and can't be used, whether that's in the editorial guidelines or whatever AI usage policy gets created, that's the first step that I think you were alluding to, but that often is missed.
And there are some significant consequences when some people are just turned loose.

SPEAKER_01 (10:50):
Yeah, I think that executive teams and owners need to understand this technology, to understand how people are using it.
I see a bifurcation, many times, of legacy businesses where, you know, even digital marketing, even components of digital marketing, they don't completely understand data

(11:13):
governance, right?
And with the LLMs, if you're connecting it up internally, maybe to, you know, a Google Drive, it can pull everything.
I've heard horror stories of, like, HR materials getting accessed.
And if you have an intern that understands how to use this stuff, sort of, right, and you're giving them the race car and no rules, and they're driving it all over the place,

(11:37):
it's going to break some stuff, it's going to destroy some stuff.
So I think data governance, AI governance, is a big topic, as well as, like, ethical AI.
Like, I think there are some real societal issues with this technology seeping into everything, and everybody has access to it.
There are even some real, even personal, horror stories of

(12:00):
guardrails that need to be put on AIs.
What are the big frameworks, or the big bumpers, that you think everybody needs to know, from an education standpoint, if they're maybe creating content online?
Let's start with the first-principle basics here.

SPEAKER_03 (12:21):
Yeah, yeah.
So I think it's important to see that, like, not all AI content is spam.
But all spam in 2025 is AI-generated.

SPEAKER_01 (12:34):
And that's a point where Google came out, and I felt like they were waving the white flag and said, hey. That's when they added, to E-A-T (expertise, authority, trust), that experience component, to try to say: prove it, right?
Because you can't just be the expert.
But they said if it's useful content, AI content is okay.

(12:57):
So if you're, you know, working on the content, and you're linking it, and you're citing references, and you're adding images, that content is okay.
Sorry, I kind of cut you off.

SPEAKER_03 (13:09):
Yeah, no. So, back to that first principle, the basics: okay, not all AI content is necessarily spam, but if you just leave it to its own devices, with somebody without controls, it'll turn into spam pretty quickly, because of optimizing for the wrong metric.
And so that's one important guideline.

(13:31):
Another important guardrail, and I think you kind of alluded to it: if you're competing on just words right now, then you're facing a challenge.
If you're in the business of trying to put out 750 words on topic X, you're now competing with LLMs that can produce near-top-level

(13:52):
intelligence words at near zero dollars, at basically the cost of electricity.
And you're going to lose that battle over time.
So you need to think about adding value beyond just the words.
And that's another one, to talk about bumpers: AI left to its own devices, or somebody left to their own devices going wild

(14:14):
with AI, will end up producing spam.
Google has an existential threat to their business.
If their search results are overrun by nothing but AI spam, why would anybody go to Google?
They would just go to the LLM that had the most knowledge of them, which might end up being Google, but it's not going

(14:35):
to be, you know, ten blue links and a click away.
And so there's the existential threat factor: if Google search results are overrun by AI, then Google search results as we know them will die, which maybe they are now already.
And so, given all of those, what I

(14:57):
would say are the guardrails to understand: Google doesn't hate AI, but hates AI overrunning the search results and having no extra value being added.

SPEAKER_01 (15:12):
Okay, so on the first thing you said: we talk about a concept called human in the loop.
You need to have somebody in there looking at it, checking it, making sure it's optimizing for the right things.
On the second piece, I absolutely think Google's being overrun, and they really need to do a lot better job with AI Mode.
I think it's pretty bad as of this moment.

(15:33):
I used it; it's very clunky.
You know, Gemini's okay, but ChatGPT is crushing it, because of the positive reinforcement, or however they've weighted it.
And I think maybe you can speak to this.
With these Google updates, is there a threshold for a site

(15:54):
that's like, okay, if it hits 20% or 25% AI-generated content, it's getting flagged, or it's getting deprioritized in the search rankings?
Have you seen anything, or know any data, related to that?

SPEAKER_03 (16:09):
Yeah, so we have some data.
We're tracking the amount of search results that have AI content in them.
And we're seeing that rise at a much slower rate than on most other online platforms.
We're seeing Medium is at, like, 50% of content, at times, suspected of being AI-generated.
LinkedIn: over 50% of long-form posts, in their last

(16:34):
study, were likely AI-generated.
Google is likely at 20%; that's kind of where it's staying.
And there have been times where that has declined, which lines up with Google taking significant action.
So March 2024, Google did a manual, I called it, like, a PsyOps update, where they did a manual deindexation on a ton

(16:57):
of sites.
The vast majority of those sites had the majority of their content AI-generated.
And so it's hard to say: at this percentage you're at risk, at this percentage you're safe.
I would say what we are seeing is that, you know, Google calls it scaled content abuse.
When a large number of pages are getting published,

(17:21):
that is something that is easy for Google to identify.
It looks like at that point there is a second-level check done on the site, and then those sites are getting nuked.
And so if you're scaling content rapidly, you're very much at risk of having a penalty in some capacity applied

(17:43):
to your site.
If you're scaling AI content, you're definitely at risk of both getting flagged and then getting punished.
As for how sites have been using it successfully: I'd say my sense is, if you stay off the radar of the scaled content abuse and your

(18:09):
helpfulness stays extremely high, and that shows up in the user metrics that they rely on, then you're probably in that maybe-okay camp.
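
The "scaled content abuse" signal Jon describes, where a large number of pages published rapidly is easy for Google to identify, can be sketched from publish dates alone. The 4x burst factor below is an illustrative assumption, not a documented Google threshold:

```python
# Sketch of a scaled-publishing signal: compare the busiest publishing week
# against the site's typical (median) week. The 4x burst factor is an
# illustrative assumption, not Google's actual criterion.
from collections import Counter
from datetime import date, timedelta

def weekly_publish_counts(publish_dates):
    """Bucket publish dates by ISO (year, week)."""
    return Counter((d.isocalendar()[0], d.isocalendar()[1]) for d in publish_dates)

def looks_like_scaled_publishing(publish_dates, burst_factor=4.0):
    """True if the busiest week is at least burst_factor times the median week."""
    counts = sorted(weekly_publish_counts(publish_dates).values())
    if len(counts) < 2:
        return False
    median = counts[len(counts) // 2]
    return counts[-1] >= burst_factor * median

# Steady cadence of one post a week, then a sudden daily-posting burst.
steady = [date(2024, 1, 1) + timedelta(weeks=w) for w in range(10)]
burst = steady + [date(2024, 3, 11) + timedelta(days=i) for i in range(8)]
print(looks_like_scaled_publishing(steady), looks_like_scaled_publishing(burst))
# False True
```

The point is not the exact numbers but that a cadence spike is trivially visible in publish metadata, which is consistent with Jon's observation that rapid scaling invites a second-level review.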

SPEAKER_01 (18:23):
Yeah, and algorithms are deciding whether you fall into these categories or not.
So there are definitely weighted thresholds on all this stuff.
How, in your world, is plagiarism being viewed today?
And also hallucinations by these LLMs, because

(18:44):
they're just predicting what is most likely the next word.
And I've found it to be like: oh, it linked to this content, and, oh, that page is not built.
Maybe we need to build that page.
That could be a good page.

SPEAKER_02 (18:57):
You know, that's interesting. I hadn't thought of that insight, but that's a great insight.
We believe the LLM wants this thing to exist so that it can cite it.

SPEAKER_01 (19:07):
Yeah, yeah.
Well, so we're working around a term. There are a lot of these different terms, like ChatGPT SEO, GEO, AEO.
I don't think the industry's decided on it.
We've really gone with LLM visibility, and said we're building around LLM visibility.

(19:27):
That's ultimately what this is.
This is what we're trying to do.
And we built, like, a custom framework to do that, and a strategy on how to make that happen.
And we're even working with some partners on some tools and indexes and things.
And, you know, there's a lot of noise, but I think, back

(19:51):
to your point, and I do want to talk about plagiarism, but Google is dying.
I think, and this is how I see it: Google's trying to take on Amazon with, like, buy right now, right?
We have the search traffic, buy right now. And then ChatGPT is, like, taking over Google from a search standpoint at

(20:14):
scale, right?
Like, they're just winning at scale.
So I feel like the business models are shifting.
Even Google announced the big launch with YouTube, and I think that's one of the battles: AI-generated people are coming and are here.
Actually, they're totally here, but the vast majority of people on YouTube are actual people, and that's really rich content

(20:37):
that can be used a lot of different ways.
And so with Google and ads and everything, what I'm seeing is being pushed to YouTube and to, like, buy now.
I don't know; that's, at a high level, what I'm seeing.

SPEAKER_03 (20:51):
Yeah, I mean, I think what's certainly evident, and, you know, rub the crystal ball on what's going to happen with Google: in basic metric stuff, we're going to see reduced clicks, and we're going to see increased conversion from those clicks that we get.
The funnel will be collapsed.

(21:13):
I think we're seeing that already, where people are going to be in their LLM of choice to do the research, gain the knowledge on whatever they're wanting to do, and then move to the web for the transaction.
Maybe that'll eventually get, you know, to the one-click situation of: okay, I planned my trip, go book it all.

SPEAKER_01 (21:36):
Yeah, oh, the agent economy.
You're talking... yeah, yeah, I agree.

SPEAKER_03 (21:41):
And so, you know, I think we're probably going to step our way there.
And as for what the world will look like over the next couple of years: maybe, you know, it's hard to make predictions that age well in AI.
But it will be: can people continue to exist in the LLMs for that knowledge gaining?
As a result, most websites will see a reduction in clicks,

(22:03):
especially if it's superficial-layer information, and then an increase in the quality of the clicks that they get, from a standpoint of conversions. And yes: LLM visibility, and seeding LLMs with the knowledge that drives them towards your solution as being the optimal solution for

(22:26):
that problem, is, I think, definitely the name of the game.

SPEAKER_01 (22:30):
Yeah, and I know that we're an SEO podcast, but, man, I also see the agent economy, I see crypto, right?
I see the money of the internet getting involved in this, to be able to transact.
Like, I think there's just so much transformation happening. And, to your point, predictions don't age

(22:53):
well.
Who knows where things are going to go.
And, yeah, a lot of it is about seeding the LLMs.
And I talk about how right now is the opportunity, right?
Right now is the opportunity, where everybody's trying to figure out what's happening.

(23:13):
Google, I feel like AI Overviews was, like, a stopgap measure to keep people from moving to the LLMs so quickly.
But as people use the LLMs... now, a study did just come out.
I can't remember who put it out.
I'll have to have them on the podcast.
But basically it showed, and this is early, that data from

(23:34):
the LLMs was just as good, maybe slightly better, but not as much better than Google itself as I thought it would be.
So I thought that was interesting.
I, however, think that the customer journey is about brand management.
And if people are using last-click attribution,

(23:58):
who knows, right?
You know, things come in on an ad, but they've seen it on Facebook, they've seen it on Reddit, they've maybe used multiple LLMs to make decisions.
It's hard to say what's happening and how people are making this decision.
It's really about how your brand is showing up, how

(24:20):
visible are you in these LLMs.
But, you know, Jon, I'm dealing with entity issues.
So you can see my name here is Matthew, Matt.
I'm showing up in the knowledge graph as two separate people.
Because my titles change, my names change, we've changed

(24:40):
the name of the company, we've changed the name of the podcast, and there are also other people out there with my name.
So there's some ambiguity around who is me.
And it's becoming a greater and greater issue.
So I'm trying to disassociate, as well as trying to overlap

(25:00):
these identities, because my actual name is Matthew Bertram, and Matt Bertram and Matt Bertram Live are just aliases, or nicknames, or whatever you want to call it.
And so understanding how all these things are connected goes back to that concept of seeding the LLMs, but speaking to the LLMs in a way that they understand what you're trying to

(25:23):
tell them, because they're doing a really great job of having to sort through all this information and give you the best possible answer to what you're looking for.
So this is really an important piece of the future, I think.
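
One standard way to consolidate the aliases Matthew describes is schema.org Person markup: one canonical name, the aliases under `alternateName`, and the profiles tied together with `sameAs`. This is a hedged sketch; the handles come from the show notes, but the exact profile URLs are assumptions:

```python
# Sketch of entity consolidation with schema.org Person markup: declare one
# canonical name, list aliases under alternateName, and link the profiles
# with sameAs so knowledge graphs can merge the duplicate entities.
# Profile URLs are assembled from the show-notes handles and are assumptions.
import json

person = {
    "@context": "https://schema.org",
    "@type": "Person",
    "name": "Matthew Bertram",
    "alternateName": ["Matt Bertram", "Matt Bertram Live"],
    "worksFor": {"@type": "Organization", "name": "EWR Digital"},
    "url": "https://www.matthewbertram.com",
    "sameAs": [
        "https://www.linkedin.com/in/mattbertramlive",
        "https://www.instagram.com/matt_bertram_live",
    ],
}

# Embed the result in a <script type="application/ld+json"> tag on each page.
print(json.dumps(person, indent=2))
```

Publishing the same block consistently across the site and profile pages gives crawlers an explicit statement that the "Matthew" and "Matt" entities are one person.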

SPEAKER_03 (25:37):
Yeah.
No, it is, for sure.
We have a content optimization tool within Originality.
And we are adding a feature very shortly that is all about LLM visibility optimization, tuning that content to what has

(25:59):
been known to date from studies: stats placed high in the findings, quotes from experts; what are the features that LLMs want to ingest?

SPEAKER_01 (26:14):
Yeah, no, all right, let's circle back to plagiarism.
Okay.
And you can start very big picture, and then tighten it all the way down to what the marketers want to hear.

SPEAKER_03 (26:26):
Yeah.
So, I mean, plagiarism has been, especially in academia, a monster business.
Everyone's been through it since the birth of the internet.
There have been plagiarism checkers.
What we have seen, both in that world and in

(26:47):
the digital marketing world, is that the amount of plagiarism that's happening has been on a massive decline.
Because why would somebody plagiarize when they can just copy and paste and get something out of ChatGPT?
And so, whether it's been plagiarized and then rewritten by AI, there's certainly some risk of

(27:10):
all of those things happening.
But the Google trend is really fascinating.
We actually should have mentioned this earlier, because it's kind of funny.
We launched originality.ai three days before ChatGPT launched.
And when we launched, there was zero search volume for "AI detector."
Now it's a four to eight million search volume keyword a month,

(27:31):
depending on when it's happening.
Whereas "plagiarism checker" is, like, sub one million.
And so if you look at the Google Trends between the two, it's quite fascinating: the "plagiarism checker" seasonality has been there forever, growing year over year.
ChatGPT comes out; "AI detector" takes two years, but now skyrockets above "plagiarism checker," and

(27:53):
"plagiarism checker" is starting to decline.
And so I think it's still something that is worthwhile checking, because there is legal risk associated with direct plagiarism as a marketer.
So it makes sense to include that in your QA/QC process, but the prevalence of it is

(28:15):
significantly declining.

SPEAKER_01 (28:16):
Let's talk about this. So, around that: what are all the major laws or guideposts that people need to follow if they're trying to build a program and incorporate AI?
Like, are there references that they should look at?
Are there laws they need to be aware of?
Like, what would be the best kind of governance policy around

(28:42):
this?

SPEAKER_03 (28:43):
Yeah, so it depends on the use case.
There's certainly fair use allowed of other people's content, depending on that use case and depending on the proper citation of that content.
And so, what are the best practices around ensuring you

(29:03):
don't get your business in trouble with plagiarism?
It's running a plagiarism check, identifying the sources that get flagged, and then doing a manual review to see: is that truly copied, or is that just your own words? The same five words can be used by multiple people multiple times.

(29:25):
And so I would say, for some organizations, the rules around it for most markets would be around fair use.
Five percent, ten percent, fifteen percent thresholds are not uncommon for companies to

(29:46):
require, and then ensuring that anything is properly cited.
You are certainly on solid footing if you're ensuring that there's five percent or less plagiarism in a piece of work, and that any time there is copying or other sources identified, you're reviewing whether those should be cited or not.

(30:07):
That gives you pretty solid footing that you are not going to get yourself into any issues.
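
The QA gate Jon outlines (run a plagiarism check, hold the matched share under a company threshold, and manually review whether flagged sources are true copies or just need citations) can be sketched like this. The 5 percent threshold is the conservative end of the 5 to 15 percent range he mentions, and the match-record shape is an assumption, since a real checker supplies its own report format:

```python
# Sketch of the plagiarism QA gate: compute the matched share of a piece,
# compare it to a policy threshold, and queue uncited matches for manual
# review. The match-record shape is an assumed stand-in for a real
# checker's report.

THRESHOLD = 0.05  # 5% matched text: the conservative end of common policies

def review_piece(word_count, matches):
    """matches: [{'source': url, 'matched_words': int, 'cited': bool}, ...]"""
    matched = sum(m["matched_words"] for m in matches)
    share = matched / word_count if word_count else 0.0
    return {
        "matched_share": share,
        "passes_threshold": share <= THRESHOLD,
        # A human decides: true copy, common phrasing, or missing citation?
        "manual_review": [m["source"] for m in matches if not m["cited"]],
    }

report = review_piece(
    word_count=1000,
    matches=[
        {"source": "https://example.com/study", "matched_words": 30, "cited": True},
        {"source": "https://example.com/blog", "matched_words": 25, "cited": False},
    ],
)
print(report["passes_threshold"], report["manual_review"])
# False ['https://example.com/blog']
```

The automated share is only the trigger; per Jon's point, the manual review step is what decides whether a match is a real copy or five common words that anyone might write.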

SPEAKER_01 (30:13):
No, I like that.
I started to think about images. There's a whole business out there of: you've used this image, it's not fair use, and, you know, they're trying to get money; people are doing that.
And now, just like content, you have AI-generated images, which are completely new images.

(30:33):
And I was listening to one of the Google podcasts, the Google Webmaster podcast, and it was saying that that's totally okay.
It was like: that's completely new, that's totally okay.
And then there's the proliferation of content that they're having to index, and that's why the unindexing is starting to happen, because the standards across the board, if

(30:58):
everybody's, you know... it has moved up, right? The standard has moved up.
I would love to hear from you, maybe, on the horror side of things, as well as the awesome side of things: some case studies on how people have used your tools, and what are some of the success stories around

(31:18):
that.

SPEAKER_03 (31:19):
Yeah, yeah, sounds good. So I'd say, kind of a horror story, and we unfortunately hear it too often: someone, typically a website owner, will come to us and say, hey, our site just got de-indexed; we weren't using AI.
And it's like, okay... like, maybe. We

(31:40):
have a website scanning feature. We feel terrible that this has happened to your business, here's a bunch of credits, go run a website scan and see what the tool says. You can see a graph of the website's content, and inevitably there's a point in time where

(32:02):
something changed in that editorial process, and the content went from, call it, one post a week to eight posts a week, and they were all AI-generated, and then the site tanked. Those ones suck, because there's real pain associated with those

(32:22):
businesses: laying off employees, livelihoods lost, because a website got tanked, because somebody on that team was taking on risks that the risk owner, the business owner, didn't understand. So that's been a pretty common use case that we have seen, where it's

(32:44):
definitely a horror story when we see it.

Have you seen a threshold? I feel like Google's taking a lot longer to index stuff. I don't know the sandbox terminology, as well as the delisting or de-indexing of the content. Have you seen a point where they're indexing it but

(33:09):
they're not really ranking it, like there's a lag time, and maybe they're running it through some of these systems? And also, is there like a grandfather period? I feel like older content does better; there's some equity accrual, or link-equity accrual, that potentially happens over time,

(33:31):
from what I've read in the trades, the patents, and so on. There are a lot of tools now where you can pull from the patents what's happening. But I wonder if there's a way to know where that cutoff or threshold is, because new content's taking a lot longer

(33:55):
to get indexed.

SPEAKER_01 (33:57):
And so when we're working with clients, and they usually come when they're in a bad situation, to turn it around we want it to happen quickly, but it's happening a lot slower than we would like to see. Google doesn't turn on a dime anymore.

SPEAKER_03 (34:12):
Yeah, I mean, my sense is that there's less movement between core updates than what used to be the case in Google. And so movements happen when those core updates are happening.

(34:33):
In between those core updates, there's not as much movement in the results as there used to be. That's certainly what we sense, and I believe that's what the data supports. And I think that lends to the sense that new content getting published takes longer to drive

(34:53):
results, because it's waiting for that, especially if a site is on a downward trajectory; it takes until the next core update to see that, okay, we've addressed the E-E-A-T issues or whatever it might be.

SPEAKER_01 (35:09):
Yeah, I think there's a lot of conversations at Google around trust, like levels of trust and thresholds. And it depends, right? That's a horrible answer. I know, yeah, that's not always the answer. What are some really positive use cases that you've seen

(35:30):
with Originality.ai, to help save something, or where you caught it before maybe they got penalized by Google?

SPEAKER_03 (35:41):
Yeah, so the common use case for Originality is somebody functioning as an editor, within a marketing company or a website writing team. They have their team submit content to them, and then they run a QA/QC process on that content.

(36:01):
Some of the things that commonly happen: writers swear up and down that they didn't use any AI; it runs through the tool, and the tool says it was AI. We have a free Chrome extension for these situations where the writer says, I didn't use AI, and the tool says you

(36:22):
likely did (the tool provides a probability, not an absolute judgment). Then the editor takes that Google document to our free Chrome extension, which visualizes the entire writing process. At that point the editor can see that the writer just copied and pasted the entire text into the document,

(36:46):
made a couple of formatting changes, and got a 100% AI score. The editor shows that to the writer, and the writer says: yep, you're right, I apologize, I lied, I did use AI.
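The "visualize the writing process" check Jon describes can be approximated with a simple heuristic over a document's revision history: organic writing grows in many small increments, while a bulk paste shows up as one huge jump. A minimal sketch under that assumption (the event format is hypothetical; the actual Chrome extension's logic is not public):

```python
# Hypothetical heuristic for spotting a bulk paste in a document's revision
# history (not the actual Chrome extension's logic). Each revision is
# (seconds_since_start, total_characters_in_doc); organic writing grows in
# many small steps, while a paste shows up as one large jump.

def find_paste_events(revisions, jump_threshold=500):
    """Return (time, added_chars) for revisions adding more than jump_threshold chars."""
    flagged, prev_chars = [], 0
    for t, chars in revisions:
        added = chars - prev_chars
        if added > jump_threshold:
            flagged.append((t, added))
        prev_chars = chars
    return flagged

# Two hours of incremental writing: no single revision adds 500+ characters.
organic = [(60, 120), (300, 480), (900, 950), (1500, 1400), (7200, 1800)]
# The whole article appears 45 seconds in: one huge jump gets flagged.
pasted = [(30, 5), (45, 5200)]
```

A real workflow would combine a signal like this with the detector's score, as described above: a human-looking revision history plus an uncertain detector result is much stronger evidence than either alone.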

SPEAKER_01 (36:58):
So go back to that one piece.

SPEAKER_03 (37:01):
So if someone's cutting and pasting something into a document, are there markers or tokens? What are the little dashes that come out? Because it's easier for AI to put together a thought without putting together a full sentence, I think. I felt like that was like a watermark for

(37:23):
a long time. What are the fingerprints, what are the telltale signs that something's AI-generated?

Yeah, it's an unsettling answer that I'm going to provide. It's similar to asking John Mueller or anyone at Google why this is ranking above that,

(37:44):
and it'll be a bunch of general platitudes, but the reality is they don't know. It's an AI, it's a black box; they can understand its behavior at a large scale, and similarly we can understand our detector's behavior at a large scale, but for any individual piece of content,

(38:06):
why was this identified as AI or not? That's very challenging to do, because the AI detector itself is a black box. We have a feature coming out called Deep Scan, which looks at better understanding how the text could be adjusted if it

(38:29):
was incorrectly identified as AI, so that writers can sleep at night knowing their work is going to pass an AI detector. So it's an unsettling answer. There are some things that can lead to it, though. If it is AI content, it is more likely to get identified as AI content.

(38:50):
If it is highly formulaic, very structured writing, or oddly formatted, those can lead to it too. Highly formulaic, highly structured writing can lend itself to looking more like AI, and oddly formatted text will reduce the AI detector's accuracy, and

(39:14):
therefore will result in it being identified as AI more often.

So specifically, with your tool, you were saying it was cut and pasted. Is the tool looking at the number of drafts, or how long the document is, or how long it took to generate the document?

Yeah, so the best practice is that an editorial

(39:37):
team, if they're using the Originality tooling, tells its writers: you must use a Google document from start to finish. Then that Google document will get a score for AI, the probability of it being AI, and it will receive a report that shows the length of

(40:00):
time, the number of writing sessions, and characters over time. If you see a thousand-word document that was worked on for two hours, with a bunch of edits and deletes, and then one little section is identified as potentially AI, and you've worked with that writer for a long period of time, you can be confident. You can

(40:20):
see the writer having written it. It's mostly that the detector has said this is likely human, but there's some uncertainty; you look at the Google document in that Chrome extension, and you can see them putting two hours of work into that piece of content, and you can be fairly confident that it is

(40:41):
human-generated, a human is in the loop, and this isn't just a copy-paste out of ChatGPT.

Got it, perfect. Okay, my brain's full, but is there anything that we haven't covered that you think's really important for us to discuss, based upon our current

(41:05):
conversation? I think there are a couple of things that I always love to make sure are understood. AI detectors didn't even exist, from a search-volume keyword perspective, a couple of years ago, and there have been a lot of misunderstandings around them. They're highly accurate, but not perfect. And so,

(41:27):
on any individual case they can have false positives and false negatives, where they incorrectly identify something as AI or human, and that's important to understand. The second piece is that Originality, and most tools, provide a classification of AI versus human, and then a probability, a confidence

(41:50):
score. So the AI detector will say, I think it was AI, or I think it was human, and here's how confident I am in that prediction. That often gets misunderstood as "this content shows up as 70% AI and 30% human," and that's not exactly what it means. It's a binary classification, AI or human, and then a confidence

(42:15):
score in that prediction.
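That distinction can be made concrete: the score is confidence in a single binary AI-vs-human call for the text, not a mixture ratio. A small sketch using a hypothetical response shape (not Originality.ai's actual API):

```python
# Hypothetical response shape, not Originality.ai's actual API. The point:
# the score is confidence in one binary AI-vs-human call for the text,
# not the fraction of the text that is AI-written.

def interpret(result):
    label = "AI" if result["ai_probability"] >= 0.5 else "human"
    confidence = max(result["ai_probability"], 1 - result["ai_probability"])
    return f"Predicted {label} with {confidence:.0%} confidence"

# A 0.70 score means "the whole document is judged likely AI, 70% confident",
# NOT "70% of the words are AI and 30% are human".
verdict = interpret({"ai_probability": 0.70})
```

Reading the output this way avoids the misinterpretation described above, where a confidence score gets mistaken for a percentage of AI-written text.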

SPEAKER_01 (42:16):
Got it, okay. Very cool.

SPEAKER_03 (42:19):
So what are the biggest takeaways or tips you can give to marketers today who are jumping in headfirst into AI? Because you need to, but you should do it in a responsible way.

Yeah, I think first is ensuring that

(42:41):
the risk owner of the company that you're doing marketing for understands, and agrees on, where AI is being used and the risks associated with it. So I'd say that's first. Second is ensuring, in the content that you're producing, that you're adding value beyond just words. If you're competing with words, you are

(43:03):
competing against essentially infinite free words, and that's hard. So find ways to add value beyond words: tools, graphs, primary data. And then third, I've been doing SEO long enough to know it's always very tempting to look for that

(43:24):
shortcut, that one button to click that makes this whole process easy. That has never existed, and it doesn't exist now.

I love that. Well, how do people get in touch with you and find out more about your work and your studies? They can of course go to the website, but I'll let you share your

(43:45):
handles and stuff. Yeah, so we publish studies constantly, with a heavy focus on how AI is impacting the internet, including how many sites are using llms.txt; we have a study running that tracks the number of websites using it, which is always interesting to see. You can see all of that at

(44:05):
originality.ai, where you can sign up for our newsletter, and we keep sharing. People can get in touch with me at jon at originality.ai, or find me on LinkedIn.

SPEAKER_01 (44:17):
Awesome. Well, Jon Gillham, everybody. Thank you so much for coming on, Jon. If you got value out of this podcast, please go to the platform you're listening on and leave a quick review; it would be super helpful. If it has to be AI-generated, that's fine, but we really need some reviews from real people. And if there are

(44:42):
things that you would like to hear about, topics, or feedback, please leave that as well. We want to engage with you. This is an exciting industry that is growing. I think it's transforming from just SEO as a vertical; 50% of traffic has left the website and

(45:06):
has gone everywhere else, and LLMs are coming into it as well. So share this with somebody you think would find it valuable. Like it, tag us, share, follow; we really appreciate it. If you want to grow your business with the largest, most powerful tool on the internet, which I guess is

(45:26):
well, bigger than the internet now, or maybe soon to be: LLMs. Reach out to EWR for more revenue in your business. Follow me on LinkedIn; I'm trying to post more. We are launching our LLM Visibility certification very soon. Jon, thank you so much for coming on. Until the next time, everybody, my name is

(45:49):
Matt Bertram. Bye-bye for now.