Episode Transcript
Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Speaker 1 (00:01):
AI is changing
content marketing, but how much
AI is too much?
Can AI-generated content hurt your search engine rankings, and how do you ensure the content you publish is high-quality, original, and optimized for success?
In this episode, I sit down with Jon Gillham, co-founder of Originality.ai, to discuss the risks and realities of AI
(00:23):
content, Google's stance on AI-generated content, and the best ways to ensure your content stays competitive.
So stay tuned to this next episode of the Your Digital Marketing Coach podcast.
Speaker 2 (00:37):
Digital, social media, content, influencer marketing, blogging, podcasting, vlogging, TikTok, LinkedIn, Twitter, Facebook, Instagram, YouTube, SEO, SEM, PPC, email marketing.
There's a lot to cover.
Whether you're a marketing professional, entrepreneur, or business owner, you need someone you can rely on for expert
(00:59):
advice.
Good thing you've got Neal on your side, because Neal Schaffer is your digital marketing coach, helping you grow your business with digital-first marketing, one episode at a time.
Speaker 1 (01:25):
This is Your Digital Marketing Coach, and this is Neal Schaffer.
Hey everybody, this is your digital marketing coach, Neal
(01:48):
Schaffer.
Tactical advice on digital marketing, some nugget, some insight, some actionable advice that you can leverage in your business today: that is what this podcast is all about, so thank you for joining me.
Now, we all know that AI continues to revolutionize content marketing, but, as they say, with great power comes great responsibility.
Google has made it clear that low-quality, mass-produced AI content won't cut it.
(02:08):
But where do you draw the line?
How do you balance efficiency with originality?
Pun intended. In this episode, I talk with Jon Gillham, co-founder of Originality.ai.
Many of you know it as an AI detection tool.
It's actually a lot more than that, but it is a tool designed to help businesses detect AI-generated content, but also
(02:29):
check for plagiarism and, in doing so, improve the quality of your content.
We dive deep into how AI content is affecting search rankings, Google's latest updates, including that helpful content update that I talk a lot about on this podcast, and what steps businesses should take to ensure their content stands out for the right reasons.
Whether you're a business owner, a marketer, or a content creator,
(02:52):
understanding how AI is shaping the digital landscape is crucial for staying ahead, so let's break it all down and figure out how to work with AI intelligently without getting penalized by Google.
Without further ado, here is my interview with Jon Gillham.
Speaker 2 (03:07):
You're listening to Your Digital Marketing Coach.
This is Neal Schaffer.
Speaker 1 (03:15):
Hey everybody, thank you for tuning in to another live stream edition of the Your Digital Marketing Coach podcast.
AI, content, Google: these are the trifecta of technologies and companies that, as marketers, as entrepreneurs and business owners, we really need to understand.
How much is enough, how much is too much, should we even be
(03:39):
using AI at all in terms of our content, and what effects does it have on our Google search results, as well as other search engine algorithms?
I can think of no one better to talk about this topic with than the founder of one of the leading, if not the leading, AI detection tools, Originality.ai, a tool that I use with my own
(03:59):
guest bloggers to detect AI content, and that I highly recommend you check out today.
But regardless, we're going to talk all about these risks and, really, how do we live together with AI, because it is becoming almost a permanent part of society.
So, without further ado, I'm going to introduce to you the founder of Originality.ai, Jon Gillham. Or co-founder.
(04:20):
I always get that wrong, Jon, but either way, welcome.
Speaker 3 (04:22):
Yeah, thanks, Neal.
Thanks for having me. Happy to talk about this stuff.
Speaker 1 (04:26):
Yeah, really excited to dig in, Jon.
I always ask my guests before we dig in.
Obviously, AI. Well, AI has been around for decades, but generative AI is relatively new.
So what brought you into doing what you're doing at Originality.ai?
What did you do before that?
Speaker 3 (04:40):
Have you always been involved in AI or high tech?
Yeah, no, more heavily involved in content marketing.
So the source of getting into the space was: I was deep in the world of content marketing for a number of years. I started that journey around the 2008, 2009 timing, and I've been heavily, heavily involved
(05:00):
in that space since. I launched a content marketing agency and had sold it; at one point we were the heaviest users of Jasper, predating ChatGPT. And then we had seen this wave of generative AI coming, and we didn't have the right controls in our agency to be able to know when we were or were not using AI.
Speaker 1 (05:19):
Got it. So, curious, were you at Jasper's generative AI conference in early 2020?
No, I wasn't.
Okay, I was there.
I thought I might have seen you there.
So, so cool.
So you come from the world that most, if not all, of my listeners are very familiar with: content marketing, using Jasper, which I have also used in the past. And both the revelation that, wow, this can really change things, but, on
(05:42):
the other hand, well, should we really pursue, you know, this much AI in our content? What have you.
So what brought you from there?
So you were at an agency, so did you really develop Originality.ai from that content marketing agency, then, saying, hey, we really need to put some control over this?
Speaker 3 (05:59):
Yeah, well, so we had sold the agency, and then, along that process, I was looking at what to do next. I had a portfolio of websites, had some other businesses on the go that are, again, all around that marketing world, and specifically the content marketing world, and then had seen this.
(06:19):
Yeah, there was a very clear need for both a modern plagiarism checker, you know, I think a lot of people have used Copyscape; it's got some problems when you're trying to use it at scale. And so, trying to create a modern plagiarism checker, a modern content quality QA/QC tool, and then AI checking was a piece of that, but only going to be one piece of it.
Speaker 1 (06:49):
And then we launched the weekend before ChatGPT launched, and then, sort of, you know, the world changes as it relates to the need for, or the interest in, generative AI.
At that point, gotcha, that makes a lot of sense.
So I guess it was the perfect timing, the perfect storm.
Speaker 3 (06:56):
Yeah, I mean, I wish it was a little earlier, but it's still better than after.
Speaker 1 (07:01):
Well, that answers my question as to why there was also this plagiarism checker checkbox whenever I do a check on Originality.ai. So when you started out, then, you know, plagiarism checker, also AI checker, and then ChatGPT.
Are there particular industries or types of companies that you found drawn to your tool because they have a more urgent need,
(07:23):
when it comes to, you know, for instance, regulated industries or what have you?
Do you see any trends, or do you see just general interest in your tool?
Speaker 3 (07:31):
So we've seen a ton of general interest, and we don't love that.
So who we're building for are people that are working as a copy editor.
So basically, you know, when you're receiving guest posts, like you described, you're operating as a copy editor: does this piece of text meet my standards?
And we look at AI, plagiarism, readability, grammar and
(07:55):
spelling, fact checking, and so those are all tools that we've built into Originality, because we're building it for somebody that functions as a copy editor.
So anybody that's publishing content on the web, that's who we're building it for.
That's who we see as having the big need, because the risk to their businesses is very, very high, and the efficiency of doing a check is also very high.
With Originality, where we've seen a lot of interest is from some other industries that aren't always as great a fit.
(08:18):
Academia.
We have tons of users from academia, and we don't love that use case within academia.
Detectors are highly accurate but not perfect, and within academia, when there's academic disciplinary action on the line, you need perfection, you need certainty of proof, and we just can't produce that.
No detector can produce that.
Speaker 1 (08:37):
And that was going to be my question: can any detector?
I have a son in high school who has been, you know, the schools all use a technology called Turnitin.
Yeah, and, you know, he's already gotten pinged by that, when, I believe, and I trust my son, I believe it was a false positive.
So I know. Before using Originality.ai, I used to use a different tool, which just flagged everything as AI, even
(08:59):
content that I personally wrote, right, even without using a Grammarly.
So it is interesting.
So would you say, then, when you talk about Originality.ai, and we're going to go into the whole what-Google-says-about-AI and all that that we plan to, but would you say that, you know, Originality.ai is an AI detection tool, but it's a lot more, and it's really about supporting the content creation
(09:20):
in the marketing departments of enterprises?
Would that be a better way to describe how you position your company now?
Speaker 3 (09:25):
Yeah, I mean, AI detection is the thing that the majority of users have come to us for, and we are the most accurate, especially on the sort of web content that we've been built for.
But, yeah, we view ourselves as a complete AI assistant, a complete assistant for anybody publishing content on the web.
So, exactly as you described, we want to get to the point where it becomes irresponsible to publish content without
(09:46):
having run it through Originality and making sure that it meets your requirements.
We want to be at a point where, even if you're okay with using AI, and you know it's going to get identified as AI, you don't want it to be plagiarized, you don't want it to be factually incorrect, and you want the right readability range.
We want it, in the end, to be, yeah, that final check for every piece of content that gets published on the web.
Speaker 1 (10:07):
Well, that's great to know.
I actually religiously use Copyscape for all of my writers, so I will definitely be checking that out.
I just didn't see Originality.ai in that light, right? I always thought of AI detection, but it's really about improving the quality of the output that you publish on the web.
So, with that in mind, Google, AI: you know, we had the helpful content update.
(10:27):
We've had sort of spam updates.
A lot of websites, including my own, even though I don't use, I don't think I use, AI content, have gotten hit.
So what are you seeing as a developer of this technology? You know, let's set the record straight on what Google is saying about the use of AI content.
Speaker 3 (10:48):
Yeah, so I think there are a lot of dogs in this fight around trying to interpret what Google says, and, like, hey, do we believe Google?
I mean, that's a dangerous game to play as well, believing everything that they say versus what we see.
So I'll try, and, I mean, certainly I probably have my own bias, coming at this from thinking that AI matters.
What Google has said is, at first they said we don't want,
(11:09):
we don't want spam. We're okay, however that spam is created; we don't want it.
We want content created for humans.
The mental exercise that I like to think this through with is: if Google search results were filled with nothing but AI, why would anyone go to Google?
Why wouldn't they just go to the AI?
And so it's an existential threat to Google.
If their search results get overrun by nothing but AI,
(11:32):
that's probably going to kill them.
But on the flip side, so why don't they say that?
And the flip side of that is, they need to be this AI-forward company, because they're facing their Kodak moment: are they going to get totally disrupted and killed by a technology that they invented? And I think that that's why they had their red alert.
That's what significantly scares them.
(11:53):
And so they need to be this AI-forward company, but, at the same time, not let their search results get overrun by nothing but AI, because then no one would use Google anymore.
So I think that's what Google has said, like, hey, we don't want spam, and then we can get into the data that we've seen to support what they've said.
But what they've said is: we don't want spam, we want content created for humans that adds
(12:14):
value, not for search engines.
And then the context of what they've said is that they face this existential threat if their search results get filled with nothing but AI, and they have to be this AI-forward company, because that's where the world is going.
Speaker 1 (12:28):
So would you say, then, and this is sort of the conclusion that I've come to, just reading between the lines of, like, the helpful content update, provide your own personal experience, that they don't really care if it's AI-generated or not, but they want content that is personal, based on experience, things that AI can't do, or that are very, very difficult for AI to do?
At the end of the day, if they have a happy user, then they are
(12:51):
happy as well.
Would you say that's true, or would you challenge that right now?
Speaker 3 (12:56):
I think I would still challenge it, in that anything that comes close to the point where somebody can press a button and it publishes content, and they add no value, if that is the machine, then no. I don't believe there's any path to creating the optimal prompt that generates content en masse, that adds value, that is a sustainable opportunity to get
(13:18):
traffic from Google.
I think there are some programmatic plays with unique data, but in terms of, like, hey, here's this prompt with a little bit of enrichment and a little bit of RAG, and this now adds a lot of value, I don't believe that is a strategy that has any sustainability to it.
Speaker 1 (13:35):
Yeah, no, I agree 100%, and, in fact, it's funny, Jon, I saw a webinar that Jasper did on how they use Jasper in their marketing, and they lean heavily on AI, not in the actual creation, but in the ideation, creating content briefs for writers, what have you.
So it is this coexistence, right? Yeah, but, yeah, I know that there are companies out there that are promoting one
(13:56):
click, and I know that maybe some sites, at the beginning, are successful with that strategy.
But, you know, Google does come around, they make their rounds, they do their checks, and, I don't know, I'm sure you'd agree, I don't know if that's a long-term solution or not.
Yeah, no, I fully, fully agree, and, yeah, I would.
Speaker 3 (14:11):
Yeah, there's some very clear data that Google has come out with. I'd say there are two zones.
There's a spectrum from human to AI, and then cyborg in between.
It's very clear that Google hates mass-published AI spam.
I'd say there's no uncertainty.
The data has supported that, they've communicated it, and that's what we've
(14:33):
consistently seen. Yeah, so let's dig a little bit deeper.
Speaker 1 (14:36):
You mentioned you have data, and one of the things I know you wanted to talk about was the amount of AI in Google search results.
So what are you seeing right now?
Is it already inundated with AI?
Has Google been able to keep AI out?
What's the current status from your perspective?
Speaker 3 (14:54):
Yeah, so we're running a study where we're looking at search results, historical search results, and the amount of AI in those results, by looking at the Wayback Machine when it's not being hacked, so challenging right now; we might have to take a month off of that data. But we're looking at the amount of AI in Google search results and then graphing that over time. And what we've seen is, in 2018 it was in the sort of 3% range, which falls in
(15:18):
line with our false positive rate, and then that's been growing, and growing at an accelerating rate, up to about 14% right now.
So 14% of the web is AI-generated.
Again, not all AI is spam, but all spam in 2024 has likely been AI-generated.
So that's what we're seeing right now.
(15:39):
And then, in terms of what we're seeing from how Google has attacked AI content: the March 2024 update, the March 4th manual actions. I kind of think of it as, like, a psyops update, where they did this massive marketing push around the update, and 40% of spam content is going to be gone overnight, and then they did this manual action across a ton
(16:02):
of sites. 2% of all sites that were on any of the popular ad platforms, like Ezoic, Mediavine, Raptive, 2% of them were all manually de-indexed on the eve of the start of that update rollout.
For the majority of those sites that had been de-indexed, the majority of their content was AI-generated.
(16:23):
When we dug deep into the sites that had been de-indexed, what was common amongst them, and it was very clear, was mass-published, AI-generated content; that was what Google had attacked.
So they won that battle, but might be losing the war, in terms of AI getting into their search results.
Speaker 1 (16:43):
Well, yeah, that was my question.
So, that 14%, going up from 3%: I'm assuming that, even with the helpful content update, with the anti-spam updates, it's still at 14%, then? Is that a correct assumption?
Speaker 3 (16:54):
It's still at 14%, and it's been increasing. It had some blips where it came down, but it's on a pretty consistent march up, yeah.
Speaker 1 (17:01):
Now, when you look at how you define that AI content, and I know that you have, like, a zero to 100% meter on Originality.ai: is it content that scores above, you know, 50%, or what's the cutoff you use to consider it AI-generated?
Speaker 3 (17:21):
Yeah, so the way to think about it, and this is kind of one of those things: so many of us have used Copyscape or other plagiarism detectors for years, and so we try to apply that same scoring mechanism to AI.
And I tried to as well, and it was challenging.
But the way detectors work, they're a family of machine learning models called classifiers, and a classifier provides a
(17:42):
prediction on whether it's likely AI or likely human, and then the percentage is a confidence score. And so the way to interpret it is, if it says it's 55% AI, then it's saying: I think it's AI, but I'm only, like, 55% confident.
So I'm not very confident.
But I'm saying that I think AI was heavily involved in the creation of this content, and I'm 55% confident.
(18:02):
Not that 55% of it is AI and 45% of it is human. Gotcha.
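Jon's distinction is worth making concrete: the percentage is the classifier's confidence in its likely-AI or likely-original call, not the proportion of AI-written text. A minimal sketch of that reading (illustrative only; `interpret_score` is a hypothetical helper, not Originality.ai's actual API):

```python
# Sketch of how a detector's score is meant to be read (illustrative only;
# not Originality.ai's actual API). A binary classifier outputs a probability
# that the text is AI-generated; the reading is a label plus confidence,
# NOT the fraction of AI-written sentences.

def interpret_score(p_ai: float) -> str:
    """Turn a classifier probability into the label/confidence reading."""
    if p_ai >= 0.5:
        return f"Likely AI ({p_ai:.0%} confident)"
    return f"Likely original ({1 - p_ai:.0%} confident)"

print(interpret_score(0.55))  # Likely AI (55% confident): a weak call
print(interpret_score(0.01))  # Likely original (99% confident)
```

So a 55% score is a hesitant "I think AI was involved," not "55% of this text is AI."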
Speaker 1 (18:07):
So, I guess, let's look at this a few different ways.
I know that there are some companies out there, despite my advice, that just hit that, you know, generate-content-with-AI button, and they publish it.
What is your recommendation for companies that realize that they might be considered part of this AI spam, even if they haven't gotten hit before?
Do you have any recommendation for them on what they can do to
(18:30):
help mitigate the potential risks of already having published AI content?
Speaker 3 (18:35):
Having already published it?
If it's working, and your site's not super valuable to you otherwise, then ride it, right?
Just know it's going to crash, right? You know, I think, having been in the world of SEO and content marketing since, you know, the 2010, 2008 timing, it's like we know there are temporary arbitrage
(18:58):
opportunities with Google.
Mass-publishing AI content sometimes produces those opportunities.
It usually doesn't end well, and so I would say there are different businesses.
There are some sites that I have where I test AI content, because I'm trying to fully understand this world.
But there was the viral tweet, called, like, the SEO heist tweet, where there were individuals working as a content
(19:20):
marketing agency, essentially, for a real company, like a software business. They massively grew their traffic, and then it went to zero, so they basically killed the organic traffic to that business by using an aggressive strategy.
So my advice to anyone that's publishing content, whether they're using AI or not, is to understand those risks
(19:43):
and then make a heads-up decision on whether or not you want to be using AI, and not leave that to the writers or a marketing agency.
The publisher should be the one that is accepting that risk, because they genuinely are.
The agency might have gotten paid for a piece of that content, but didn't live with the risk, and so that's what I'd say.
So, if they really care about the site, rip out the AI content.
(20:05):
If they don't care about the site, but it's working from a traffic standpoint, ride it, but expect a crash.
Speaker 1 (20:11):
Yeah, really, really good advice.
So you already mentioned what Google does to sites that it considers to be generating AI spam, in terms of de-indexing, and we know that, for those that are relatively new to SEO, you might not see it, but I compare it to being a stock investor and seeing a few crashes over the years.
You sort of see that the markets do even themselves out,
(20:32):
and I do believe the search engines do as well, right?
So, based on that, if there is a business owner, an entrepreneur, listening, that has outsourced content creation, or that has had guest bloggers, how do they go about determining the risks of that content, of having AI, of maybe having too much AI?
Maybe just having a little here and there, as long
(20:53):
as it's really helpful and it includes personal experiences, is okay.
But, obviously, I'm assuming you're going to recommend a tool like Originality.ai.
But even with that tool, you mentioned that there's no 100% certainty.
It's based on probability.
So how would you go about it, if you were the owner of a company that was outsourcing content creation to
(21:14):
a content marketing agency?
How would you go about using your own tool?
Speaker 3 (21:15):
Yeah, so we have a tool, I believe we're the only one that has it, but it's a site scan feature. You can load a site in, and it will crawl all the pages and then provide a score, likely AI or likely human, for every article that's on that site. Then you can see that by author, by publishing date, and you can get a sense of how much content has been published on
(21:36):
your site up to that point, and then you can decide what to do.
Is it specific authors?
If we were to run that on our own site, we do have some content that is AI-assisted, especially from some of our research team, where they're English-as-a-second-language individuals, wickedly smart, but they're using AI to
(21:57):
assist in that content creation.
So what I would do is use the site scan feature to understand the risk, and then work with the agency to understand what policies and controls they have in place, and be very scared of anyone that says, our writers don't use AI.
And then it's like, how do you know?
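The per-author rollup described here can be sketched in a few lines. This is a hypothetical data model for illustration, not the actual site scan output format:

```python
# Hypothetical sketch of the kind of report a site scan produces: each post
# gets a likely-AI/likely-original verdict with a confidence, and the results
# roll up by author so an owner can spot where AI content is coming from.
# (Illustrative data model only, not Originality.ai's real output schema.)

from collections import defaultdict

posts = [
    {"author": "alice", "date": "2024-01-10", "verdict": "likely AI", "confidence": 0.99},
    {"author": "alice", "date": "2024-02-02", "verdict": "likely original", "confidence": 0.97},
    {"author": "bob",   "date": "2024-02-15", "verdict": "likely AI", "confidence": 0.62},
]

by_author = defaultdict(lambda: {"posts": 0, "likely_ai": 0})
for p in posts:
    stats = by_author[p["author"]]
    stats["posts"] += 1
    if p["verdict"] == "likely AI":
        stats["likely_ai"] += 1

for author, s in sorted(by_author.items()):
    print(f"{author}: {s['likely_ai']}/{s['posts']} posts flagged likely AI")
```

Filtering the same records by date would show whether AI content started appearing under a particular author or after a particular agency engagement.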
Speaker 1 (22:13):
Yeah, and I guess even using something like Grammarly is AI-assisted, right? Whether we like it or not.
Speaker 3 (22:19):
Yeah.
Speaker 1 (22:19):
So tell me a little bit more about site scan.
I think I've seen it on the site; I don't think I've used it before.
So someone opens it, and it comes up within Originality.ai.
And I'm just thinking about my own site: I have 750 blog posts, multiple authors, on WordPress.
So is it able, just by a scan, to go through, and, by author, by post, it's going to give me that report, correct?
(22:40):
And I guess my other question, then, is how much time? Like, for a 750-post blog, is this, like, a 24-hour thing?
Is it immediate? You know, walk us through the process and what the output looks like.
Speaker 3 (22:55):
Yeah, so two options.
One option is not yet available; it will be available in probably a week or two. But the current option is: yeah, go into the site scan feature of Originality, enter your URL.
You can control a few options on how many posts you want scanned.
Do you want, sort of, a random 100, the most recent 100, or do you want all 750 scanned?
Then, you know, you click start, it gives you an estimate for
(23:19):
the cost.
Click start, and it will go out and scan that content.
Timing to do 750, assuming, kind of, 1,000 words per post, would be under 15 minutes. Not sitting there, scroll and done.
So it'll take a bit of time, but, yeah, probably under 15 minutes to complete: scrape all the content, scan
(23:39):
all the content, and then provide an analysis.
The second option: you mentioned you're on WordPress.
We have a WordPress plugin that's coming out very shortly. Scraping can produce some inefficiencies in terms of the content, whereas, within WordPress, it can be a lot cleaner.
And so we have a WordPress plugin that can achieve some very similar results, in terms of: scan all posts, filter on an author,
(24:02):
scan all of that author's posts, et cetera.
Speaker 1 (24:04):
Will the WordPress
plugin allow you to also do that
by category?
Speaker 3 (24:09):
Yes. Basically, any filtering mechanism that exists within WordPress, posts or pages, however you can filter in there, you can then select the posts and then scan those posts.
Speaker 1 (24:18):
And then the output of that is going to be, obviously, a lot of data: by post, by aggregation, by author, category, date, maybe.
Is it going to be an average probability, or confidence, that AI was included?
Is there any additional information that's being provided?
Speaker 3 (24:33):
Yeah, so the scoring is always the same, where it's likely AI or likely original, and then the confidence score. So it'll say, likely AI, 99% confident, or it'll say, likely original, 100% confident.
And then, within the WordPress plugin,
(24:53):
it doesn't have quite as much reporting, but within the SiteScan app you have a lot of ability to filter and look at graphs over time, to really understand how AI has been infiltrating your site, potentially without your knowledge, if you haven't had controls in place.
Speaker 1 (25:11):
So we talked about Copyscape, which is plagiarism.
We talked about the AI detection.
One thing we didn't talk about, which I think your tool also provides, is a fact-checking capability.
I'd like to learn more about that, because I see, even with guest bloggers, they will cite statistics without giving the source of that statistic, right?
Or, a fun one to do is ask ChatGPT for case studies of
(25:33):
something, and it will bring up fictitious case studies that just don't exist when you do research.
So I'm curious as to how that functionality works, and is this also part of the site scan or not?
Speaker 3 (25:43):
Yes. Not part of the site scan, but, yeah, a standalone feature when you're doing a content scan.
So the primary feature that people use the tool for is: they get a piece of content from somebody, and they then need to make sure that content meets certain standards.
They put it into the content scanning app within Originality, and they choose what scans they're going to run on that piece of content: AI, plagiarism, fact checking, readability,
(26:04):
grammar, spelling, et cetera.
On the fact checking, we call it a fact-checking aid.
We would love it to be accurate enough to be called, like, a fact-checker, like the source of truth.
But it's an LLM-powered app as well, and it is prone to its own hallucinations.
What it does is it acts as an aid to help an editor
(26:26):
fact-check far more efficiently.
It will identify all facts, which it does quite accurately.
It will then understand the context of the article, go out, do a deep search on the web to try to identify what the common truth is about that piece of information, and then provide an answer as to what it believes is factually accurate or not, provide a truthfulness score, and then
(26:50):
provide sources to go and dig deeper.
And so, again, we call it an aid.
But it ends up with an article that has every fact highlighted, green to red, in terms of the confidence that it's factually accurate or not accurate, and then a bunch of helpful resources to go and verify whether that statement is
(27:11):
actually true or not.
It's very accurate, but it's, like, 85% accurate. Not nearly high enough to be, like, fact-checking, green, done, I don't need to check.
It doesn't.
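The green-to-red highlighting described here amounts to bucketing a per-fact truthfulness score into an editor-facing color. A minimal sketch (the thresholds and the `highlight` helper are assumptions for illustration, not the product's actual logic):

```python
# Hypothetical sketch of the aid's output stage, not the real pipeline: each
# extracted claim carries a 0-1 truthfulness score, and the editor view
# buckets scores green-to-red so low-confidence facts stand out for review.
# Threshold values are invented for illustration.

def highlight(score: float) -> str:
    """Map a 0-1 truthfulness score to an editor-facing highlight color."""
    if score >= 0.8:
        return "green"   # likely accurate; spot-check only
    if score >= 0.5:
        return "yellow"  # uncertain; verify against the cited sources
    return "red"         # likely inaccurate; needs editor attention

claims = [
    ("Water boils at 100 C at sea level", 0.97),
    ("Water boils at 80 C", 0.35),  # context (altitude) may still make this true
]

for text, score in claims:
    print(f"[{highlight(score)}] {text}")
```

The point of the buckets is triage: the editor still verifies, but spends their time on the red and yellow claims first.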
Speaker 1 (27:25):
It doesn't absolve the editor of their fact-checking job yet, right? But if there was, like I said, data that was cited with no link from a guest blogger, it would be able to see that there was a fact there, give it a green or red light, and then provide a source.
So if I wanted to check it and find the source of that link, it would actually provide me that information.
Is that correct?
Speaker 3 (27:46):
Exactly, exactly.
The process that you described is what editors need to do when they're fact checking, and the tool is built to make that process as efficient as possible for them, and it does exactly what you said.
It will provide a link, and it can provide some pretty magical sorts of comments. Where, like, the boiling point of water is 80 degrees Celsius, it's like, well, that's wrong.
(28:08):
But if the context of the article is the boiling point of water at a certain elevation, on a hike, in whatever camp at Everest, then it will say, okay, here's the elevation of that camp, here's the boiling point of water at that elevation, and, therefore, the boiling point of water being 80 degrees Celsius might actually be true.
Speaker 1 (28:28):
Right, okay, very cool.
So I think, hopefully, the listeners and viewers can see how, using it, it's almost like AI for good.
Right? It's using AI to improve the quality of your content.
And, Jon, I wanted to end, and I wanted to ask you where Originality.ai, and where you think AI, is going from here.
It's not necessarily about the AI, which really permeates everything we do, but it's more about, well, if the AI content detector
(29:00):
detects it as smelling like it's heavily AI-written, it's going to be poor quality, and that is not going to resonate with readers, because it sounds robotic. That's the conclusion I've come to, and that's why I'm still a fan of AI detection tools.
For that reason: is it human-sounding, right?
And I think that we are now getting used to seeing more and more AI-generated content, to the point where, if I see a paragraph that starts with, in these days of something, comma, I
(29:22):
know it came from ChatGPT, right? And I think we're all getting in tune with that.
So, at the end of the day, do you find a lot of people, a lot of businesses, are leveraging AI detection in that way, of saying it's just a part of quality control now? Or are people still, religiously, we need to, you know, anything that's over, you know, even if it's a 10% confidence, we need to revise it?
(29:43):
Where do you see this now?
And then, you know, going forward, where do you see this going?
Speaker 3 (29:48):
Yeah, so there's a wide spectrum, with some people operating off of a hard number, because that's how we've worked in the world of content marketing: plagiarism score of X, readability score here, content optimization score above this threshold. Requiring a clean score integrates into that workflow so seamlessly that there are
(30:10):
still people who want to do that, and I understand why. So we see a lot of people still doing that. It's not our favorite use case, though. I think the optimal use case is an editor-and-writer relationship, where the AI score is used to help ensure that there is a human in the loop for every piece of content that
(30:32):
is being published and that the quality stays high. If there is an AI score above 50%, so likely AI, then that editor and writer can work together.
We have a free Chrome extension that takes the character-by-character creation history of a Google document and visualizes that writing process. It's a free tool, but it's meant so that if there's ever a potential false
(30:54):
positive, the editor and writer, or teacher and student, can go in, look at that creation process, and really understand what was done in the creation of that document. So my hope is that there's going to be continued progress toward using detection tools to help people
(31:14):
understand: was a human involved in that creation process, or was it one click, copied and pasted from ChatGPT in seven seconds? And, you know, a lot of people are happy to pay a writer a hundred dollars, a thousand dollars, but they might not be very happy to have it copied and pasted from ChatGPT in seven seconds.
Speaker 1 (31:33):
Yeah, indeed. So would you say, then, that we're moving from a world of "AI is evil, we need to detect AI content and rid our systems of it completely" to a world of quality control, of understanding holistically where the content's coming from, and of using tools like Originalityai as just another way of controlling the quality
(31:53):
of that content in light of the use of AI? Is that sort of the world we're at now? And then, where do you see this going, and where do you see Originalityai going in the next several months, a year or two?
Speaker 3 (32:03):
Yeah, so some are, for reasons that may or may not be valid, taking a stance of "no way, not on my site, period. My writers are not allowed to use AI at all, ever, full stop." They might have business reasons different from ours that push them to that outcome. So we do have some people who are saying "no AI, period" and
(32:25):
are holding a strong line using detection tools. Most people, though, are moving to the world that I think we're both talking about, where it's used as part of an overall quality control step, but it's only one of multiple signals. That's the direction Originality is heading as well: continuing to build out features that help editors do their job.
And so what do editors and copy editors need to do,
(32:46):
AI or not? Plagiarism, fact-checking, readability scores, grammar and spelling, editorial guideline compliance, so how closely does this article line up with the editorial guidelines for our site, and is this article going to perform well in the search results? Those are some features coming down the line, currently under development, that I'm pretty excited about,
(33:08):
in terms of how we're approaching those problems in the world of publishing content. And so the end state is that it'd be crazy to publish content without having run it through Originality to make sure it meets your standards, complies with your guidelines, and will perform as well as possible in search results.
Speaker 1 (33:29):
Well, very cool. I'm looking forward to that, because when you talk about making sure content complies with editorial guidelines, that'd be very cool too. We know AI is capable of this concept of a tone or a brand voice, and making sure the content that's written is actually aligned with that, compliant with that, would be very cool. I don't know of any other tool that has that right now. You know, ChatGPT has,
(33:50):
"Well, what tone would you like me to take?" and what have you. But from a large enterprise brand perspective, being able to do a quality check of that tone, of that voice, would be very cool, so I'm really excited. And in all honesty, John, before this interview I thought of Originalityai as just another one of those AI detection tools. But given that content marketing background, I can see how marketers should very much be interested in what
(34:11):
you're developing, and I'm definitely going to check out that site scan and get the tips for my own website. So, John, just to close off: anything you wanted to talk about vis-a-vis AI content or Google content quality that we didn't cover? I want to give you the floor. But also, for those who are interested, what should their next steps be if they want to check out Originalityai?
Speaker 3 (34:31):
Yeah, no, I think we covered everything really well. It's been a great, fun chat. If they want to check out Originalityai, head over to originalityai; there's a free tool for almost every one of the features that we've talked about. And if you have any questions for me, you can reach me at jon at originalityai, or find me on LinkedIn.
Speaker 1 (34:49):
Well, there you go. Very cool, John. Thank you for your openness and for sharing. A lot of people don't know what they don't know, and they only know about AI what they see in the media. So hearing it from a company that has developed AI detection tools, I think this has been invaluable. I'll definitely be checking out Originalityai. Being a Copyscape user myself, I can see I'm only getting part of the picture when it comes to quality-controlling my own
(35:11):
content. So, very cool. Thank you so much, John, and I hope you all enjoyed this episode. Feel free to reach out to John, and hey, I'd love to hear where you are in terms of AI detection. So reach out to John, reach out to me. I'd love to hear from you. Let's keep this conversation going, because it really is a critically important conversation going forward. John, thanks once again, and I hope to keep in touch.
(35:32):
Awesome.
Speaker 3 (35:33):
Thanks, Neil.
Speaker 1 (35:33):
I really hope you enjoyed the interview. There aren't that many people I can bring in who can speak with some sort of authority, because they have a tool and they've been testing these things and seeing the results, and really give us solid advice on AI-generated content, search engine rankings, what have you. It's something we all have to deal with.
And hey, if you are looking for more help with your AI
(35:57):
marketing, content marketing, blogging, social media, or any of that, anything that we discuss in this podcast, I cover it weekly in my Zoom calls in my Digital First group coaching community. We also have a Facebook group, but really it's the weekly calls that help hold you accountable, get your questions answered, and let you always learn a few things from what other entrepreneurs,
(36:21):
startups, and business owners are tackling. It is really the best way I can help you, above and beyond this podcast. So, if that interests you: by the way, everybody who is a member gets a free one-on-one 30-minute coaching call with me every 90 days as well. The investment is less than $100 a month. Check it out, go to neilschafercom slash membership,
(36:42):
and that's it. Thank you again for joining me on this journey, which continues. And if you ever thought we'd run out of topics to talk about vis-a-vis digital marketing, man, there is so much to talk about. I already have interviews lined up for the next, well, six months really. So lots of good stuff to come. Make sure you hit that subscribe button and stay tuned. And that's it for another, hopefully you agree, exciting
(37:04):
episode of the Your Digital Marketing Coach podcast. This is your Digital Marketing Coach, Neil Schafer, signing off.
Speaker 2 (37:13):
You've been listening to Your Digital Marketing Coach. Questions, comments, requests, links? Go to podcastnealschaefercom. Get the show notes for this and 200-plus podcast episodes at nealschaefercom, and tap into the 400-plus blog posts that Neal has published to support your business. While you're there, check out Neil's Digital First group
(37:36):
coaching membership community if you or your business needs a little helping hand. See you next time on Your Digital Marketing Coach.