Episode Transcript
Available transcripts are automatically generated. Complete accuracy is not guaranteed.
(00:00):
In today's episode of theAuthority Hacker Podcast, we're
gonna cover seven differentpractical use cases for ai, deep
research for business owners.
We'll cover everything fromkeeping track of what your
competitors are up to and thesecret power plays that they've
been making to researching dirton other companies before you do
business with them.
And we'll also cover how to usethis to absolutely blow up cold
(00:23):
email outreach in terms of bothprospecting and data enrichment.
this is very new and part of thereason why it's not so popular
yet is because until veryrecently, OpenAI was limiting
this to their$200 a month.
Pro plan.
They've just brought it out tothe$20 a month plan, but you
only get 10 deep researches permonth.
(00:44):
Fortunately we have some otheroptions and we're gonna share
some almost free models that youcan use to perform deep research
in your business.
Alright, stop using it,Elizabeth.
Go.
Welcome to the Authority HackerPodcast.
We're here face to face in thestudio, and we're taking an
(01:07):
in-depth look at the deepresearch functionality in
various AI models today.
So Gail, we've been using this alot recently.
Mm-hmm.
can you just start by explainingwhat the hell is deep research
and why should business ownerscare?
Okay.
So you know how they've beensaying that 2025 is the Year AI
agents and everyone's talkingabout AI agents.
(01:28):
AI agents are sort ofautonomous.
You give them a high level taskmachines that can go away and do
stuff, it's like, go make memoney, and they should come back
with a, a bag of cash.
Sure, sure.
In theory, that's what an AIagent should do.
the problem is it's a lot ofit's been hype.
it's not like most of whatpeople call AI agents are not AI
agents.
But this is the first kind oflike real agentic AI thing that
can do something.
(01:49):
you know, when you search forhow to get in shape, like you
don't just type how to get inshape.
You, then you search for like aworkout plan, then you search
for like a diet plan, then yousearch for all that stuff, and
then you put it all togetherinto like a plan for yourself.
That's essentially what AI deepsearch does it, it breaks down
your high level query intomultiple subqueries, read the
resources, learns from it toknow what to search next, and
(02:11):
kind of goes deeper and deeperuntil it makes a giant report
that essentially helps.
Answering your high levelquestion.
I think there, there's a reallygood practical use case where we
could have used this maybe ayear or so ago.
Mm-hmm.
when we were thinking ofchanging shopping cart.
Yep.
we were looking at all thedifferent alternatives and
member of our team spent quite afew days, actually a week
(02:32):
researching, you know, all theFAQs, going to support that,
asking all these questions.
Pulling that all together into anice document and presenting it
to us in a way that we couldjust quickly and easily see the
differences and make a decision,from that.
This would've done that in a fewminutes.
we needed someone who hastechnical knowledge on all this
stuff and understands thedocumentation, the APIs and all
(02:53):
of that.
And so like, we paid the dev,which, you know, devs are not
cheap.
Yeah.
Uh, a week to research that.
And yeah.
Now you can pretty much get thesame output that we got from him
in about, yeah.
Maybe 20 minutes if chat, youuse the chat DPT one, which is
the slowest one.
Probably three to five minutesif you use perplexity, that is
the, the fastest one.
And it's at least 80% is good.
(03:13):
This is the thing as well.
So it was O OpenAI came up withthis concept or they No, no, no.
Google actually had it first inGemini with Gini 1.5 Pro.
Nobody gave a shit becausenobody gives a shit about Google
AI announcements, but, but Openmade a good one.
That's the thing as we'll aswe'll find out.
It's actually not that bad.
What, what Gemini can do, butjust that they do.
(03:35):
Yeah.
I think part of the problem hasbeen that when OpenAI released
it, or you know, popularized it,let's say it was only available
in the$200 a month pro plan,right?
Yeah.
And you got a certain number ofthose per month, a hundred.
Only recently, like last week orsomething.
Last month.
Last month.
That they have brought it intothe$20 a month, the plus plan.
(03:58):
Which most people will be using,but they've been pretty stingy
with that.
Right.
You get 10, you get 10 a monthper month.
So I'm out, for example, like myplus account, I cannot use it
anymore.
And the reason why is becauseit's using the O three model,
not O three, meaning the Othree, the full O three model
that is not even released yet.
Like you cannot have it three p,you cannot pay for it.
And that model gets incrediblyexpensive.
(04:19):
Like when they do their kind ofa high compute test, like in
their, benchmarking, one queryto the model is$1,500,$2,000.
Sometimes I think some of themscale even in the multiples of
thousands of dollars for onequery.
So it's extremely inefficient,extremely expensive.
But as you'll see, you can seethe intelligence of the model,
uh, in the results is the bestone.
The problem is like you havevery limited access to it.
(04:40):
And what I found with deepsearch is like, especially for
some use cases that I'm going toshow.
Quite often you need to repromptit.
Retry it a few times before youactually get the output you want
and so on.
And so like 10 a month, youbasically get to search like two
or three things.
Yeah.
With a bit of re prompting,which is, it's just not enough.
It's really more if you justwanna test it and see what it
can do at, at this point, yeah.
It's a free trial in that GTtrial.
(05:01):
But as we're gonna show in thisepisode, there are other
platforms, grok, perplexity.
Emini you mentioned, and Gemini,they're the four, the moment
which offer this or are thereothers that we didn't ask?
There is open source ones.
so it's like you can go onGitHub and some people have
tried to just copy what chat GPTdoes.
It's like more or lesssuccessful, I'd argue, like
(05:22):
probably the other shelfsolutions are equally good to
these open source versions.
for most people, I don't thinkit's false bothering with these
ones, but they exist and I wantthe focus of this podcast to be
on the actual practical usecases.
for business owners because it'sall fine and well, talking about
AI theory and processing powerand nobody care, this kinda
stuff.
(05:42):
If you're listening to thispodcast, nobody else, you
probably don't care.
You, you do care a lot.
so let's talk, let's share someof the use cases which we've
come across.
And, uh, we, we've tested out,so do you wanna start with the
first one?
Yeah.
And it's like, I think that'sone that's kind of like going to
be the most practical for mostpeople is like essentially
staying up with the competition.
So, like.
The prompt I use for this test,I'm gonna start sharing my
(06:04):
screen here, is, I pretended I'mteam from hfs.
Mm-hmm.
Uh, Tim.
So, and I said, Hey, I'm Timsole of CMO of hfs, analyzed my
competitors, offer positioningversus popularity research.
How brands positioned at the lowend versus high end are
perceived across social mediareviews, revenue, profit,
industry discussions, and,identify the consumer sentiment.
The market gaps andopportunities that I could
(06:26):
strategically exploit,basically.
So, so this is as Tim is CMO orYeah, I am team right now.
Equally a business owner wouldwant to know what their
competitors are up to, what thecustomers of the competitors
think and how that compares totheir, their own business.
If you, in startup mode, right?
It's like if you're trying todefine like how you position
your product.
Like am I just making anexpensive product for few people
or am I making like a cheapproduct for many people?
(06:49):
And like how do people feelabout this?
Do people complain a lot aboutthe high end offer?
do they say it's worth themoney?
Do they say it's not worth themoney?
Like it really helps you get agrasp on that and it's quite
useful, especially if you don'thave the mental space To keep up
with lots of people.
'cause whether you're in startupmode or you're running a
business like keeping up witheverything that all your
competitors do, it'schallenging.
And so like that's what I wantedto do.
I wanted to see like how closethey get to being able to do
(07:12):
that.
And I was quite impressed.
Probably one of my favorite usecases.
Of this.
Okay, so what, what did we findout?
I mean, what we found out is thefirst, the best one is che pity.
like, and honestly, this, thiscame out across most of these
tests.
Not all of them.
I, I have an example most assaid, most, and sometimes it
goes off for hour, but like,yeah, it's like it this
(07:32):
something that's very aligned.
Like, I'm gonna show my screenactually on chat, DPT.
And most importantly, it'squite, it's, it's quite
readable.
So, by the way, if you'relistening to the audio version
of this podcast, then all of thelinks to these chats and,
obviously the video version ofthis podcast will be on YouTube.
So go check that out as well.
But the links will be in thedescription as well.
If you wanna go check out theprompts we're using on the
(07:52):
results.
So you can see it did a summaryand we're gonna stick to the
summary.
'cause these answers are really,really long and I don't think
that would be a great podcast toread all these answers.
but like basically identifiedthe, you know, the premium tools
versus the budget one.
So premium was like EMS and Moz.
The budget ones being uber,mango, skew finder, se ranking.
It didn't, yeah, it's, it's moreor less the market at this
point.
(08:13):
it looked at the market trend,and it's like, it doesn't have
much revenue data because mostof these companies are, are
private.
So it got, SMR data, it lookedat, Reddit.
It looked at all of that toessentially see like how people
feel it looked at Trustpilot forexample.
So hres, despite productexcellence, carries a poor 1.75
out of five trust pilot scorebecause company pricing
complaints, for example.
(08:34):
credit, issue a couple yearsago.
Yeah, lower.
They got, I think they gotbombed on trust pilot, for
example.
lower run brands are generouslyseen as approachable, ethical,
in their practices.
Example new patterns has a lotof goodwill because there's no
credit system and et cetera.
So it's like it did that quitewell.
So, was was there anything inhere that you, as someone who's
quite well connected to thisindustry, didn't already know?
(08:56):
Mm, not really.
I think it's like, it's more forlike, if I was disconnected for
the market for like 12 months.
Okay.
Or like, I didn't, like let'ssay you have a hundred
competitors, like you're inlike.
You know, the, like e-comm orsomething?
E-com.
Yeah, exactly.
Hundreds of competitors.
It is very difficult to keeptrack closely to what they do.
And if I did not know HFS aswell as I do, like, for example,
(09:18):
I don't know like whateverything se ranking does and
new products cetera.
This would be highlighted inthis results basically.
Okay.
that is quite nice in thataspect.
And the thing is like thesereports get really long, like,
I'm gonna scroll the chat.
GT one, like you can see justhow long that is, is very long.
and so you can, you essentiallyend up just summarizing this
most of the time.
So you copy the results, you putthat into a normal chat and
(09:40):
you're like, okay, like what arethe main takeaways?
But yeah, overall.
It's a solid one and it's onethat is worse probably
automating.
Do you mean by like, you wouldrun this report every week,
every month?
Yeah, something like that.
So perplexity, whichunfortunately was the worst one
in this one.
Like, in terms of ranking,basically, like, the best one
was strategy PT, just behind wasGemini, which was really good.
Gemini is really good because itreads a lot of stuff and it can,
(10:02):
use YouTube content natively.
When you say it reads a lot ofstuff, what do you mean?
It means the number of sources.
Like the number of webpage thatit is going to is higher.
Exactly.
Like complexity will tend to gothrough less results, but it's
gonna be faster.
Whereas Gini, GPT, and even Grokalso, they goes through like
hundreds of pages sometimes.
So it's, it's quite like, yeah,it would take a week to do that
report if team gave that to anintern, for example.
(10:25):
Sure.
Okay.
so it was connected to YouTubeas well?
Yeah.
You, you were saying, and, and Iwanted to ask about that as
well.
'cause some of these tools, someof these models have ex
exclusive rights to certaincontent.
You know, Google, it's a bit ofa wall garden war here, right?
It's like, for example, none ofthem can use meta content.
So anything on Instagram, onFacebook, or whatever.
None of them has access to that.
So there's lots of complaintsbecause meta is blocking that.
(10:48):
So it's not really like if yourintern was to go and do this,
this report.
No, that's good.
They would have access to moresources.
'cause they can go on the socialnetworks that, these tools, not
all these tools can crawl.
But for example, most of themcan crawl Reddit.
So Reddit is there for Gemini,they have access to YouTube
obviously.
GR has access to Twitter.
Yeah.
and so that's kind of, you can,you know, it's kind of extreme
(11:09):
services, right.
A little bit like each time,like the leverage of content and
you get access to that.
And so it's quite interestingto, you need a subscription to
all of them to really do itproperly.
Same as when you're streaming.
Right.
If you wanna watch all the goodTV shows, you have to maintain
like seven subscriptions toseven streaming.
so here's a question then.
if I had a subscription to allof these And I was gonna run a,
let's say a monthly report onthis, would you.
Create an automation then thatused each one and then sort of
(11:32):
collated all of the datatogether into the final report.
Well, the problem with that isthat there is no, there's no API
for most of these.
The only one that has an API isperplexity.
The good news is in most cases,and let's talk about pressing
where we're here because wetalked about JGBT, but not the
rest, is that most of these youcan run for free.
Actually, yeah.
So it's not too bad.
So like JGBT, yeah, it'sexpensive.
(11:52):
You just get 10 a month.
it's, it might be worth buyingmultiple plus plans with 10
queries.
Like if you need like 20 or 30,it's still less than 200.
and nobody cares about oh onepro apart from like, advanced
developers, so mm-hmm.
It's not really worth it.
Gemini, if you have a GoogleWorkspace account, which most
people have, or like if you havethe Gemini Advance plan, which
is essentially the Google Drive,one with two terabytes of
(12:13):
storage, et cetera, you haveaccess to this unlimited.
So we essentially had unlimitedwithout realizing it.
Right.
Just'cause we had GoogleWorkspace, you didn't know, it
was like, Hey, you're like, oh,how do use it?
Just germinate do google.com,and it's like select deep
search.
and as well, this one was justupdated to the latest model, so
it used to run on an old AImodel that was like two years
old, germinate 1.5 Pro.
(12:34):
This has changed to their 2.0flash thinking, which is much
smarter.
and so like, it's not quite asgood as the open AI one, but
it's just behind.
and so as a result, it worksvery much like the open AI one.
It's a little bit less smart,but it's unlimited.
Mm-hmm.
perplexity you can, if you justreached an a free account, you
get 3, 5, 3 deep researches perday.
that's crazy.
Yeah, that's good enough.
(12:54):
You just wanna test it out.
It's not even test it out.
It's like, I don't think you runlike dozens of these every day.
Right.
So it's like, it depends on theuse case.
But yeah, if you're doing areport like this, then sure.
Yeah.
So five a day, like most peopledon't need to pay for perplexity
and there's plenty of ways toget free perplexity accounts or
like one year for free, etcetera.
Like a lot of carriers give youthat.
And so on these days, carrierseven cell phone?
Yeah.
T-Mobile.
Oh, okay.
All that stuff.
(13:14):
Like quite often this is one ofthe books that they do.
It's one of the ways they gainmarket share.
Okay.
so lots and lots of ways to getit.
And in terms of people who arethinking about automating, then
perplexity is the only one thatcurrently has an API.
Yep.
We were having a conversationthis morning.
Surely, surely all of the othersare gonna be in a rush to get
this into API so they can sellit.
(13:34):
Right.
If you know your big enterpriseis wanting to use this a lot,
you can make money off of them.
I don't know if that's gonnacome right now.
Right now.
Mm-hmm.
first of all, because like openAI is super scared of
distillation after deepsea.
Mm-hmm.
Essentially, if you raise yourmodel.
people can train their cheapermodel on the results of your
model and get 95% of the waythere for like a much less lower
(13:56):
cost than, than your stuff.
So it's kind of like better foropen AI to gate it behind the
app and have control over howmuch volume people can get and
so on.
Mm-hmm.
And see, it's, it's, yeah, it'smore control.
And so if there's no, if there'sno competitor releasing the API,
I'm not like, they kind of likemimic each other, so I don't
know if that's gonna happen.
The only people that I could seereleasing this is grok.
Because Grok is trying to gainmarket share.
(14:17):
Okay.
And so, like, if there is a toolthat nobody else has, that's a
way to, to make, so we're kindof in this, this war where these
companies have to give awayYeah.
Free stuff that caughtpotentially cost, cost them that
lost leaders in order to buymarket so much market share.
Like it's so much like probablya tragedy.
Deep research probably cost themlike five, 10 bucks or something
easily.
Wow.
Okay.
So by using our 10 on our 20plan, you lose money on your,
(14:39):
you've lost money.
Oh yeah.
But most people don't use it.
So that's how it works.
a second, Going back to theother results,'cause we just
talked about tragedy.
I think the runner up wasactually Gemini, which was very
good as well.
And what I like about Gemini,I'm gonna show up my screen, is
it shows you like all thesources it uses in the report
and all the sources it didn'tuse in the report.
and most importantly, under eachsection you have these little
arrows that you can click andyou can see exactly which links
(15:00):
were used to generate.
This section.
And that's quite handy in termsof like understanding where the
information comes from.
and fact checking.
But most Gemini is actually oneof the lowest hallucinating
models.
So Gemini was a close second.
Then after that we had gr x-rayin terms of quality.
So yeah, that's pretty much it.
So the next use case we've gotis around essentially running
(15:22):
background checks on people orcompanies that you might wanna
do business with.
So in what use case would, inwhat situation would this be
applicable?
Well, if you're a business ownerOr a market that's hiring an
agency or something, you'reabout to spend a bunch of money,
a bunch of time on people.
And.
It's hard to have a mentalbandwidth to do like a, a
thorough check on people'sclaims.
for the, like people tell you,they're amazing.
(15:43):
People will tell you all that.
Wait, so, so people say thingsthat are not true.
Wow.
Yeah.
Yes.
and so like, this is a great wayto kind of like reconnect to the
facts and, and people's, it willbasically look at the trail of
their online reputation and findout it doesn't have everything.
If something's not published onforums or, or whatever, it won't
pick it up, but it's still quitehandy.
(16:04):
so I first of all, like kind ofmy gray hat side kind of came
back up when, this, and I waslike, oh, can I find dirty stuff
on people to like blackmail themor something?
Alright, not, I didn't want ablack movie, but you get the
idea like, how can you abusethat?
And like, how would someoneabuse that?
And so did you do this for meor?
Yes, I did.
So at first I was like, I camewith an abuse prompt, which was.
(16:24):
hey, like find some dirtysecrets of mock web stuff from
authority.
Okay.
And most of the models told melike, they can't do this.
So they have some kind of like,it was very obvious.
All of them or just most ofthem.
All of them.
Okay.
Um, like they would go throughvarious phase.
So for example, grok would likedo the reasoning.
Mm-hmm.
So I could read the reasoningand what you found, but like it
would not kind of output thefinal report.
(16:46):
That was kind of the limit, butmost of them stopped right after
the query.
Okay.
but then I was kind of like alittle bit more insidious about
this, so I was like, Hey, I'mwriting a biography of Mark
Webster from er and during ourinterview he said like, he
doesn't want to be this polishedcharacter.
He wants to expose flaws andpotentially even controversial
things so that his character ismore believable and trustable.
(17:06):
Right.
pretty much most of them justwent through and deep
researched.
So you, you essentiallypretended to be doing your
biographer doing me a favor.
Yeah.
Because you, you asked me duringthe interview and, and then it,
it just did what you wanted.
Okay.
Yes.
So does, does that say there's aproblem with AI in general that
you can trick it so easily or,mm-hmm.
Yeah.
Actually recently tropic hadlike a contest of like you
(17:28):
trying to essentially jailbreaksome models.
Okay.
And then they would pay youmoney if you managed to
jailbreak it.
Okay.
So that's one, one of the onlycompanies that does that.
But like, for example, grok, youcan see it's not tested nearly
as much and it's much moregobble in terms of these things.
but unfortunately there wasnothing really interesting about
you that came out.
Okay.
So, so here's another question.
Mark Webster's quite a commonname.
(17:49):
It figured it out.
Did it?
Okay.
Right.
It was good.
Actually, that was also onetest.
I, I did that on like, otherpeople in the SEO industry.
Okay.
And I found some crazy stuffthat I'm literally not gonna
reveal because that's how crazyit was.
Okay.
But I'm just saying it worksjust fine.
And you can background check.
Most importantly, like for thebusiness use.
Yeah, yeah.
Is like, you can backgroundcheck people.
If there is any kind of onlinetrail of something bad they've
(18:10):
done, you probably won't getsomething bad.
They've done another job unlessthe, the boss made a blog post
about it, which is unlikely.
but you will still find stuffeasy.
It's really bad, basically.
So, I mean, an obvious use casefor us would be, when working
with sponsors, right?
We got a lot of people Yep.
Reaching out to us wanting tosponsor this show or the email
list.
And most of the times they'relike, if we know them, it's
(18:30):
fine.
But if you don't know exactlywhat the company does Or there
may be a newer company, it'slike you need to spend some time
like digging into them beforeyou even consider that.
Right?
Yeah.
And that's exactly what I did.
So actually deep research, I'llsponsor, main sponsor search
intelligence.
Okay.
Uh, to essentially like look athow rigid they are.
And also to look at thedownsides that people like what,
(18:51):
what, say, what people saywasn't so good about them and so
on.
The good news is overalleveryone came back with like,
it's a pretty good company.
even unread people say it'slike, they're like really good.
Like on Glassdoor there's goodreviews and so on.
People say it's nice.
So I went to check alsoemployee's feedback.
when I did that, most of themdid actually, I think GR did it
and Chad, GPT did it.
so you have found that onGlassdoor?
Yeah.
(19:11):
Okay.
Like it checked like whatemployees said about them.
Mm-hmm.
And then, and then it did,essentially like helped doing
that.
They highlighted, for example,like one highlight that was
negative about the, searchintelligence, sorry.
Fair.
is that, like.
Some people said some links arenot as relevant as they want it
to be.
citing some links on likeentrepreneurs do,
entrepreneur.com, et cetera.
(19:32):
High-end, like people complainabout price, but that's kind of
happening to most services.
Like on Reddit.
Some people did that and largenumber of emails sent per
campaign.
Basically.
It was kind of like a, a largeblast basically.
So like, that's why it came outto be fair, that's like what all
digital PR is.
Anyway.
I asked these models to dig forbad stuff.
So they did fine.
and they found complaints likethe, the mid, which is like very
(19:53):
minimal Compared to like whatyou could find on an average SEO
company.
Believe it or not.
in terms of like which one wasbetter?
So charge GBT one again, like ithad the most comprehensive
coverage.
it was good at finding the badstuff as well.
it also compared them to theircompetitors.
So you compared them to likeSiege, media rise, et cetera.
the second one was perplexity.
the report was easy to read.
(20:14):
It also found the drawbacks ofthe company, whereas Gemini was,
it didn't find, it only foundpositives almost.
And Grok was quite surface levelactually.
Mm-hmm.
So it wasn't as good, eventhough it's checked more
sources.
So sometimes, like when you seenumber of pages they check, it
doesn't mean the output'snecessary better.
the only problem with Gemini, asit is for most of its sensors is
quite like a big long dryreport.
(20:35):
So, like, uh, it's hard to read,like, I'm gonna show it on
screen, but it's like it's, theydon't really break down into
subsections and so on.
And so that, that makes itdifficult to read.
Most of the time I just findmyself copying the result and
asking like, questions about it.
The good thing in Gemini thoughis like, I'm, I'm showing it on
my screen now, is you canactually then ask questions,
follow up questions.
Mm-hmm.
And it doesn't just read itsensor.
(20:56):
But also is all the sources itfound.
Mm-hmm.
So it goes back to all the webpages, like the a hundred, 200
web pages it read, and it willanswer your question if it
hasn't extracted that in areport.
So it's like, it's kind of agood way to make up for the fact
that the initial report is notas readable.
the problem is like, because itreads all these pages, it takes
like maybe 10, 20 seconds toanswer a query is not very fast.
(21:17):
let, let's actually talk abouttiming as well.
'cause there is quite adifference between.
Timing, the amount of time eachmodel takes to, to process.
I noticed that Chad, GBT, andGemini take, you know, sometimes
five, 10 minutes.
Mm-hmm.
Whereas Perplexity and grokusually get it done within, in
perplexity in like less than aminute.
Yeah.
grok sometimes one or twominutes.
(21:38):
Yeah.
I mean, it's kind of like, youknow, you can tell like how many
rounds it goes because itbasically does a search, it
reads and search more.
it, it, it analyzes then thinksif it needs to search more and,
and it shows you the number ofsources that it's going through
as it goes.
But the thing is like, thereasoning is just as important
as the sources.
So it's kind of like a round oflike reasoning check sources,
reasoning, read it, reason onthis, check sources, et cetera,
(21:59):
and keep going.
And so as a result it, I,there's much more resources
allocated to Gemini and totragedy g pt, like by far.
That's why it takes so long andit like.
2.0 flash very fast.
Like the model that actuallyruns Gemini, for example, is
very, very fast.
Mm-hmm.
So the reasoning is not thebottleneck.
The bottleneck is how manyrounds it goes through.
Mm-hmm.
as a result, it will dig morestuff out, for example.
(22:20):
And that's why, there were someof the better results actually.
The only problem here for theGemini one, for such
intelligence is that the modeldidn't reason deep enough in
terms of the flaws of thecompany.
Like, that's, that's, but it hadmore, more sources, for example.
So the, I think an interestingcomparison here is when I
interviewed Ferry on thispodcast In January, I used AI to
do some research on him.
(22:41):
But when I was doing it, I hadto be very specific every time
about what I was asking.
So like, you know, check on, x,y, Z site for, negative things
about search intelligence or,contradictions that, yeah, you
don't do this anymore, but thisis essentially saying, Hey, hey,
I just need some backgroundinfo.
And it goes and does it, itfigures out what to do as well
(23:01):
as doing it.
Yeah.
That's the agent AI thing.
Right.
You give it a higher level andthat's why prompting it is so
different.
Yeah.
Because you give a much higherlevel prompt here.
You don't need to go into thedetails.
So if you would compare this toworking with an employee, it's
kind of like giving them a stepby step SOP where they have to
follow all the instructionsversus just saying, Hey, figure
(23:21):
it out.
You're responsible forresearching this person.
Figure it out.
Yep.
Pretty much that's pretty gamechanging to be able to do that
already.
Yeah.
And very few people are using itproperly at this point.
I mean, the, the power of whatyou can do with that alone, let
alone when you add in automationto that We'll get into that a
little bit later, but that,that's, yeah.
That's good.
Yeah, I know, I know.
(23:41):
It's good.
the thing as well is Gemini,they will, it will actually make
the research plan.
So it'll do like step one, steptwo, step three.
That's what I'm gonna do.
And then you can validate it ortell it to change it.
So for for the research, on,search intelligence, sometimes I
can't remember, but I think Itold you like, oh, check Glass
door as well, for example, checklast door, check, like while you
forum or something like somespecific stuff that I know about
it.
The same way you would give someindication to like a lower level
(24:04):
employee that doesn't have theexperience.
So it just gives them somegeneral guidance, but they pick
up on that and they find otherthings as well They don't just
do what you say.
Yeah.
So it's like they will, like, itcomes up with the plan and I
just give it feedback basically.
like let's check the searchintelligence query actually, but
not all of them ask you toclarify what you want.
Like perplexity and grok.
Just give it, give it to you.
(24:24):
Yep.
I've noticed that chat, GPT andGemini, they, they tend to ask
clarifying questions or like, ismy research plan correct before
going that?
'cause they take much moreresources to do the job.
It's much lower.
It's basically the real deepsearch is Gemini.
Mm-hmm.
Like, they're pretty much, andthe other ones are like semi
deep searches, let's just saythat.
Okay.
but yeah, you can see on myscreen that I also, I asked for
(24:45):
the search intelligence.
So ask, can you also, searchsocial media, YouTube comments?
LinkedIn and Reddit?
Mm-hmm.
I don't think it has access toLinkedIn, but I was like, I'll,
I'll try it.
We'll see what happens.
There's no LinkedIn sources inthe sources, so my guess is it
doesn't have access.
Okay.
but overall, yeah, so that's,that's what it does.
if I really want to dig onsomeone, I'm gonna use, well TG
pt, if I still have credits,which is rare these days or
(25:06):
Gemini, but like, almost like Iwould just use Gemini if it's
not good enough.
Probably fall back to tragedy.
P because Gemini is unlimited.
I feel like Gemini would be thekind of, if you need to do
actual deep research, it's thedefault one just because of the
credit anxiety.
And maybe if you're doing some,like, big reports, like a
quarterly competitor report,monthly competitor report, then
chat GT might be, yeah.
(25:28):
You consider it there and youneed to ask yourself like, is it
worth paying multiple plusaccounts on G PT just for this?
for some people it is.
People pay consultants lots ofmoney for literally the output
of, of these.
Yeah.
And it's still a's a goodsometimes.
I'm telling you, someone canjust start a business.
in some industry being aconsultant and 80% of the output
will be deep research.
Slightly edited and everything,and formatted into a nice report
(25:50):
with a logo.
you could almost do like acompetitor intelligence.
Yeah.
As a service.
as a service.
Yeah.
And the CEO or the founder getsa weekly report on all their
competitors, Like sell asubscription, like a hundred
bucks a month, 200 bucks amonth, like whatever.
It's quite quick to set up.
And if people like, you can sendpeople up for like one year
contracts or something.
Yeah.
It's easy.
I it's easy money.
(26:11):
if you want an AI business ideathat's not overused, not selling
articles on the network for like5 cents.
Yeah.
Then, that's a good oneactually.
So the next one we're gonna talkabout prospecting, but just
before we do, uh, I want to givea quick shout out to today's
video sponsor.
Digital pr.
They've just launched theworld's first subscription based
digital PR service that makespremium 100% white hat link
(26:35):
building accessible to anyonethrough a mix of reactive PR
expert commentary anddata-driven campaigns, they
guarantee a minimum of.
Five to 20 high quality linksfrom top tier publications every
quarter.
And this is not some shot in thedark approach.
It's backed by searchintelligence's proven track
(26:55):
record, and is made possible bythe world's largest digital PR
team now accessible at the clickof a button.
Unlike traditional agencies thatrequire huge retainers, digital
PR offers transparent monthlyplans starting now from just 700
pounds per month.
That means no.
Long-term commitments and nohidden fees, just guaranteed
(27:16):
results.
So if you want to level up yourlink building without breaking
the bank, head over todigital.pr and join the
revolution that's making premiumdigital PR accessible to
everyone.
So thanks again to SearchIntelligence for sponsoring this
episode of the Authority HackerPodcast.
And thanks to them forsponsoring the last year, year
and a bit, uh, of the, of theshow.
(27:37):
They, they had a long-termcontract with us.
It is coming to an end, sadly,at the end of this month.
so if you or your business ispotentially looking to sponsor a
show like ours, then do get intouch.
If you go to authorityhacker.com/sponsorship, there's
some options there to contactus.
We only want to work withreputable companies though, and
we probably will deep researchyou just to verify things.
(27:58):
So yeah, we will do that forsponsors for sure.
Okay.
Let's talk about prospecting.
So the challenge I set myselfhere was to build a prospect
list for cold outreach forMarketing Pros.
They're a South Africanrecruitment agency.
We have a partnership with them,and we've been helping them do
some cold outreach, using,appify to generate contact lists
(28:20):
and, uh, you know, usinginstantly tools like this to do
cold outreach, to generatesales.
a lot of scraping, a lot of datasources, mail merge type stuff.
when we were first talking aboutit, you were like, well, you'll
need to use something like thatand then use the research, deep
research in order to, enrich thedata and, help you personalize.
And we'll get onto that.
(28:40):
But I wanted to see if I couldactually just get the prospects,
the prospect list out of deepresearch and.
I was able to, okay.
so that, that was pretty cool.
initially I didn't have muchluck, so I, I had a prompt where
I was asking it, explain thetypes of companies I was looking
to work with and asked it togenerate a big list.
(29:01):
And they tend all the, all thetools tended to skew to towards
giving me, like a report.
And then there maybe be like,here's, here's what I did,
here's how I looked for it, andthen here's a few companies in
this.
And it wasn't really thathelpful.
I was looking for, for a biglist.
However, in perplexity, forexample.
all I asked was, can you make abigger list?
(29:22):
Give it to me in a long listformat so I can copy it into a
spreadsheet.
Okay.
And it gave me, withinperplexity, it can't do
spreadsheets per perplexity.
Yeah, it does a table, right?
But it gave me, it didn't evengive me a table, it gave me like
a comma separated.
Uh, it actually wrote csb, ah,you pasted it in there.
I pasted it into Excel and didthe text to columns with the
comma separator.
and I had it, and there was overa hundred prospects.
(29:44):
Are they good prospects?
And they are good prospects.
So it found video production,agencies, the companies that do
films.
It found marketing agencies thathad video services or that hired
video editors And it foundYouTube channels, within a
certain range.
Okay.
So honestly, perplexity wasgreat.
Like really, really, reallygood.
Probably the best in terms of,it was fast, it gave me what I
(30:07):
needed.
And if I just looking to build alist of a hundred or so
prospects, great.
And what did you get?
Did you get the email list?
Did you, did you just got thename of the company?
I didn't get emails from, Ididn't get emails from, I just
got the name of the company.
so I would need to go and findthe website and the email e
email account.
Okay.
So the, the deeper research isnot good at finding email email
(30:30):
accounts when it, when it hasgiven it to me.
It's given like the info out orthe generic ones, which you
usually don't wanna beoutreaching to.
You wanna be outreaching tospecific people.
so you'd use tools likehunter.io or Novio or many, many
other email finding tools to,to, to do that at a later, later
step.
so that was perplexity.
Very, very happy with that one.
(30:51):
if we look at Grok, it did thesame the first time around.
But then I asked that clarifyingquestion, can you give me a
bigger list?
And it was a little bit worse.
I would say there was, again, abig list, more than a hundred
companies.
Broken down by country.
Broken down by type of agency inthe right format in CSV two.
No, no, no.
So this is the thing.
It just gave me a bullet pointlist of the companies.
(31:11):
So in terms of like looking atit on the web browser, it
actually looks nicer'cause I caninterpret it.
But from a use case perspective,maybe not quite as good.
It's just one step away.
In lamb though, you can justcopy, paste it into whatever a p
ai and just make, make a,exactly.
And if you're doing coldoutreach, you're, you're gonna
be chaining a few differentprocesses together.
So it's not too much of a, of anissue here.
(31:33):
and grok and perplexity we'reboth super fast.
the other two.
So Gemini was a little bit morechallenging to actually get
this, it kept reading a reportor something?
Well, it, it gave me it, but itgave me like a bunch of
different tables initially.
So like, I wouldn't be able to,to, I'd have to copy paste, and
there's a lot of text inbetween.
The one thing about Gemini, we,I think we noticed a lot, is it
(31:54):
really loves massive longparagraphs of just like
explaining what it's, it's it'sdoing.
But I, I don't care about that.
I just want the output.
I was though eventually able, Ihave to run a couple times.
Uh, but when I, when I run itthe second time, I was able to
get a, a table Okay.
Which had, agency name,specialization, location, and I
(32:15):
actually gave some notes, aswell, which I thought was
really, really handy.
Gemini and chat, GPT askedclarifying questions.
and I think that's what actuallyled chat GPT to go off track a
little bit.
the first time when I asked itto do it, it gave me, I think it
found maybe like 15 or 20prospects.
But it did find.
email addresses, LinkedIn andcontact information as well.
(32:40):
So it went really deep.
one of the clarifying questionswas number of employees.
And I think what happened is ifit couldn't find the exact
number, kept going, it didn't,didn't include it.
Ah, and so from prospectingperspective, that's maybe not
quite so good.
And that's why it found so fewprospects.
So instead I reran it and addedanother paragraph, just
clarifying, Hey, I'm trying tobuild a big prospect list here.
(33:02):
You know, if you don't find theexact number of employees, don't
worry about it.
Like, just give me a big list.
I want a big list.
Give me a big list.
Okay.
And then it gave me a big list.
so again, the formatting wasvaried.
So sometimes it would just giveme The company name and then
like a quick blurb about them,you know, size or whatever.
And then sometimes it would giveme two or three sentences about
each company.
And it was, it wasn't reallythat usable from a list building
(33:26):
perspective.
if you're doing this at scale.
So, you know, I would still sayGemini was probably the best
here, not perplexity.
so I would still say that Geminiand perplexity were, were kind
of the, the two best perplexityfrom a.
Hey, here's what I need.
And it gives it to you inseconds.
Yes.
Great.
I do think the, Gemini totallist, if we, if we'd gone back
(33:48):
and formatted it correctly,would've been a bit more
extensive, though.
It looks like there's maybe acouple hundred in here.
Well, here's my question.
The fact that you can do it viaAPI for publicity, doesn't it
make it so much better?
Absolutely.
Absolutely.
So this is, it's cheap as welljust evaluating the tech, but in
terms of evaluating what youwould use right now, then per
imagine if I broke it down, howmuch is perplexity, for the API?
(34:09):
Yeah, so it's$2 per milliontoken in, which is not a lot.
Again, close throw Sonet is likethree, it's about the same
basically.
And then out million token is$8.
Mm-hmm.
So that is 15.
GT four is 10.
Mm-hmm.
Just for comparison, but youalso pay an extra$5 per thousand
search results that it consults.
(34:30):
So it depends on how many, ifthere's 200 sources.
you pay extra dollar, basicallyfor, right.
Okay.
Okay.
so, but, but it's fair.
Like the price is pretty muchthe same as another.
Oh.
And anything, if you compare itto any kinda scraping you're
doing, you're gonna be cheaperpaying, like a lot for this.
So if you want a highly targetedlist, I think this is quite,
yeah, because you can break downyour prompt for different
industries.
(34:50):
Like you, instead of sayinglike, oh, I want a video
agencies, YouTube channels, etcetera, just run one prompt for
it.
So it just finds more for, andyou can have an LLM brainstorm
like hundreds of subes and runthat prompt like you put an LLM
function into make that outcome.
That's like, okay, like this iswhat I'm looking for.
This is the prompt.
Now fill the placeholder with ahundred variations, then make a
(35:10):
hundred API calls, eachreturning probably there would
be a lot of duplicates, but itdoesn't matter.
You can make, you can't have aDe-duplicate later.
And then you have a list of like10,000 prospects on the other
side inside the spreadsheet.
directly there, and then youYeah.
You can enrich with more otherthan stuff you, so that's what I
wanna talk about next is, isdata enrichment.
Yeah.
Okay, let's go for it.
So once we have our prospectlist, the next thing I wanted to
(35:32):
do is see if I could add a levelof personalization.
So, you know, the start of anoutreach email you, you say,
Hey, like, I noticed somethingabout your, your business.
And then you make a kind ofcomment on it.
Yeah.
And nine times outta 10 it'ssome bullshit thing.
I noticed you've been growing.
Yeah.
That's impressive.
Or, you know, it's some genericthing that makes it look very
templated.
(35:53):
So what I wanted to do is see ifI could use this to add a
convincing level ofpersonalization that would pass
my kind of sniff test to, toOkay.
To see if I, if I would, whichis better than 99% of
outrageous, I have highstandards for this because we,
we've done a lot of this stuffin, in link building in the
past, and just when you'reexposed to it so often you, you
see all of the Yeah.
(36:13):
You, you see through a lot ofit.
Let's put it that way.
I get so, so many bad ones.
So.
The good news is it was able todo a fantastic job of this.
Okay.
And it's, it's prettygroundbreaking, actually, if you
can use this at, at scale.
the prompt I used for all ofthem was basically I want to
research my prospect list a bit.
I'm a recruitment agency.
I'm looking to find interestingthings that the company does
(36:34):
that I can use for my emailopeners and hooks.
for each company give me a 200word report about what makes
them unique.
And a 200 word report of notablethings that they have done or
achieved recently.
Here are my prospect and I justgave it four prospects.
I was gonna say how many, youprobably cannot hit like a
hundred prospects in these.
Well, I gave it four to see whatI would come up with.
and we can talk about how you'dpractically do this.
(36:56):
if you're doing it at scale, theonly option that would work, I
think would be perplexity.
'cause you need one at a time.
You need an API to do this.
And in most cases what I gotback was, varying length
reports, but you know exactlywhat you would expect.
So for example, on Epic Videochat, GT found that they were
all about making videoaccessible to artists.
(37:17):
So they had like cheaper optionsfor, aspiring.
music producers and artists thatwanted to make videos basically.
so, you know, that's a usefulinsight on its own.
And for the achievements andawards these were mostly video
agencies, so it would be awardsthat they had won, events that
they had taken part in, videosthat, been particularly notable
(37:38):
and things like that.
And it was all, fantastic stuff,like really, really useful.
So what I asked, AI to do insome cases was just give me the
research and then I took thatresearch and put it into chat
GPT and said, hey, I need towrite The condensed version of
the, I noticed sentence.
and for all of them it was, itdid really, really good here.
(37:59):
I also tried to do it in oneshot though, To ask chat, GPTI
said Grok, perplexity in Geminito do the research and do the
writing.
Probably it wasn't so good.
There were more mixed results inthat case.
interestingly as well, I alsocompared feeding all the
research, the longer researchinto chat GPT and feeding it all
into Claude and Chat.
GPT did a better job I wasusing.
(38:20):
4.5, And I was using Claude 3.7Sonic.
We're not gonna do this for theAPI, but Sean.
Sure, sure.
But I just wanted to see what'sthe best that you can do.
And the problem that Claude hadhere was the, it was fine with
the observations.
So it would say like, I noticedthat you did this, or you
achieved this, or You won this.
Great.
Did really good.
But when it would say what itthought of that.
(38:42):
It felt very fake.
Okay.
Which is a fantastic achievementfor a company like yours.
I think you need to prompt it tobe authentic.
That's what usually works forme.
I did specifically use the wordauthentic.
Okay.
And I told it what not to do, soit still didn't quite hit the
mark.
Okay.
With the same prompt though,chat, GPT, 4.5, got both the
analysis part and theobservation part, like really
(39:04):
down to a T And this was withinput from, didn't matter where
the input came from.
having 400 words of content wasmore than enough, to go on.
So yeah.
Probably shorten it and do itfaster.
It can be, well say, if youdon't input too much, it can
probably work.
It's a very expensive model touse for API 75 in a hundred
fifty five, fifty out, like, youknow, compared to like the two
eight we talked about forpublicity.
(39:25):
but it's like if you're doingreally short and you're making
lots of money from your coldoutreach, then I mean, why not?
Mm-hmm.
but yeah, it's like, I think foranyone who does cold outreach,
who's going to either forselling stuff.
For getting links for anythinglike that.
Like these deep research canreally change things and make
your outreach stand out in anautomated, scalable way.
(39:47):
and you know, I have really highstandards when it comes to, to
these things in cold outreach.
'cause we've done a lot ofcontent and courses around it in
the past.
If you just wanted to do thesimplest, cheapest one shot,
give it a company output, the,the, the sentence perplexity is
good enough to do this.
So you can use that with theperplexity API, but you can make
a really good automation.
(40:08):
So let's say you basically putan, initial prompt of like, Hey,
find new prospects.
Yep.
Then you have an LLM thatbrainstorms the kind of like the
seed of the types of prospects,like our video agencies, YouTube
channels, blah, blah, blah.
Just brainstorms like 50 or ahundred, whatever.
It runs that through perplexityDeep search on the api on
make.com.
You can do that.
You kind of like reformat theoutput, put that into a
spreadsheet, basically, like aone line per company.
(40:30):
and then you take, you makeanother automation that reads
the spreadsheet for every row.
It does a deep search for thecompany, finds everything about
it.
And you have an a ap, do youhave an API call to GP 4.5 or
sonet if it, if you can promptit well enough To write the
email.
Mm-hmm.
And then essentially draft theemail into your inbox.
Mm-hmm.
And you wake up in the morningand you have like 500 draft
(40:53):
emails for called out and youcan check, you can double check
before you send so you don'tsend shit basically.
there's a couple other steps youneed to add is like finding the
email addresses.
Oh yeah.
But you can do that by calling,hunter API or something, hunter
Apollo, all these stuff.
Like, it's not super hard, butyeah, just enrich your
spreadsheet basically.
I guess there's a couple ways todo this.
You could have AI write a fullycustom email, but I think it's
(41:13):
more likely to, I don't know,could just write one line maybe.
Well, the way I was envisagingit for,'cause at cold outreach
at scale, you kind of have yourtemplate for what you, what you
want.
It's just the one line and thenthere's like maybe one or two
merge fields.
Yeah, usually the openingsentence or two.
so that was, would be the firstsuper easy to make.com.
Like you can write the email andbe like, for this, just put the
(41:35):
output of ai.
And just like you have thesevery small input output from
expensive 4.5, G 4.5 in there,and then you have like amazing
emails.
You can double check or not, youcould send the emails if you
trust it enough.
My recommendation would be todraft the emails in your inbox
and then just spend the daygoing through it, re prompting
and fixing whatever.
And then after running it fortwo weeks, then maybe you can
(41:57):
let it email.
I think to, to do this, toscale, you would need to not
have to check everything.
So Yeah.
In initially to install,initially the quality threshold.
Good.
And then let it run.
Otherwise you're gonna send abunch of spam or it's not gonna
work and you're gonna burn yourprospect list.
There's no point.
But like, it's quite costefficient compared to what most
people do.
especially in cold outreach,right?
'cause you are paying for everyemail.
You're paying for scraping,you're paying for the time for
(42:19):
personalization.
If you are doing, are doingthat.
Yeah.
Or if you're not doingpersonalization, then you're,
then you're paying wastedprospects.
Yeah.
So this will improve yourresponse rate if you're doing
link building, if you're doingcold outreach to generate lead
sales to get on podcasts.
Like anyone that spams yourinbox, like they should be using
this and it works.
And we see it works because wedo it for marketing Pros and
they're getting leads actually.
it's probably one of the bestopportunities right now.
(42:41):
in terms of like selling stuffor whatever.
So yeah, just take this awayfrom this podcast and that can
change your business.
I just wanna say rip everybody'sinboxes after this.
Sorry for your inbox.
I think it's a matter of timebefore there's some kind of like
AI filter in your inbox.
Yeah.
because I just can't see this,like it's kind of like the
golden goose right now, like,called outreach.
but get in and do it beforethat.
(43:01):
It's a period right now.
You have six months to makemoney from this.
I think of it longer, but yeah.
And if you don't make money fromthis, you can get your money
back from this podcast.
Alright, so let's move on to thenext one.
We'll talk about fact checking.
Yeah.
So I mean, I, I was a bit cheekyon this one because, um.
I wanted to fact, like, again,we come from the SEO industry.
We don't really do as much SEOanymore.
(43:21):
We still put it as part of ourmarketing mix, but it's not our
core focus anymore.
But still, we know this industryvery well.
So in this case, the, I wasthinking about, cur topic
authority guy.
So he's, he, just to explain toeveryone, like he's been on this
podcast before and it was a goodpodcast.
One of our best performingpodcasts, actually.
So go check that one out forsure.
he has, or he's very well knownfor this concept of topical
(43:43):
authority, which is a, um, atheory or, um, in, in SEO I'm
not more than a theory.
Yeah.
Practicing.
SEO, but.
It's quite complex.
Like to, to put it like lightly,he loves putting patents on
Google.
he's done a lot of research onGoogle patents and he's, come up
with a lot of concepts that hesays help based on the fact that
(44:05):
he's, he's, he's read thepatents.
Yeah.
And he, he claims to understandthe algorithm better than than
others in, in certaincircumstances.
Yeah.
Now the problem has been thatsometimes it's so complex and
the way he talks requires, likea lot of rewatching and
rereading of the things he saysin order to really, really grasp
it, that it puts people off.
(44:26):
And you challenged him on thepodcast about that.
so it did Well, people likethat.
so we thought we'd do it againby except have AI try and try
and do that.
Well, the thing is like, hereads this patterns that nobody
else reads.
So literally he's likecompletely.
Not fact checked in thisindustry.
Like people just like can't bebothered to read the stuff.
Therefore, I mean, to be fair,there's a lot of information out
(44:47):
there, so that's fine.
It's like, it's not, can't bebothered, but like, don't, so my
question was like, is he, is hereal?
Like does he, is he actually,and also is he interpreting
these things properly?
Yep.
Or is he kind like eithertwisting the reality, not doing
the research properly orwhatever, like, which is
something nobody has done in theSEO industry, right?
Mm-hmm.
And so that's what I did.
I went to to Gemini and a bunchof others, and I was like.
(45:09):
Fact check this, and I just gavethe URL of one of his blog posts
that quotes a lot of thesepatents and so on.
it just went through this.
It went like, if you checkwebinar, I'm gonna share my
screen, but I asked you to makea table at the end with like
each patent it checked.
Mm-hmm.
And whether it's accurate ornot, the way it's represented in
the article to his defense, mostof it's accurate.
There you go.
And so like, that's fine, butlike what it did, and if you
(45:30):
check the sources, you can seeit went on all these patent
pages on Google to like read allthese patents and then just
compare what was said in thearticle compared to a patent and
make sure it doesn't contradicteach other basically.
Gemini was by far the bestresult here for me.
Do you think that's becauseGoogle has access to the
Potentially Yeah.
It's not Google Scholar, it's,it's patent@google.com.
(45:50):
Yeah.
So they have the, the, the listof, and it's a big part of the
sources here.
Like the sources in Google werelike excellent, like really,
really good sources.
publicity was a little bit morecritic critical of his work, but
some stuff was kind of, Bitstupid in the, so in the
comparison table.
So for example, what do youmean?
So for example, at some point hetalks about like there's some
Turkish text that's nottranslated.
(46:11):
It just puts that into likeTurkish content in English
article section Semantic sidethree contains and translated
Turkish text inaccurate and justflag something that is accurate.
But actually it's just himshowing the case study of the
Turkish egg.
Okay.
Fair enough.
That's not really what I ask.
It's something else.
I think Google did a better jobas like identifying what was
really a patent and really goingthrough that.
Mm-hmm.
So Grok was also pretty good.
It actually went through thepatents quite a bit.
(46:33):
it was more readable as well.
the problem is like the table atthe end, it didn't really kind
of fact check.
It just gave me a bunch of blurbon like what the patent is about
and how it works, and a link tothe patent.
It didn't really do what I ask.
Whereas Germany did a better jobbasically.
So overall, fact checkingGermany is very good.
Um, because it, it, it searchesdeep and the key is like, yeah.
(46:55):
The, the report again is Gemini.
It writes lots of text and it'sdifficult.
Yeah.
But if you ask it to make atable at the end, it does a good
job.
So.
Okay.
That's kind of a trick in theGemini one is like, as I said,
very good but bad formatting,but give you instruction on how
to format.
use more subheadings, make me atable, do all of that.
And then you can almost keep thereport, read the table, and then
that's it.
(47:15):
You have your answer, exceptit's been thought about a lot
and it's done all the deepresearching.
so yeah, it's like we're notasking you to check fact check
all of Corey's article.
That's not the point.
The point is that let's say youlike, again this developer.
Yeah.
That was doing this research forthis shopping cart for us,
right.
Technically techno,technological, choices made
(47:36):
internally, discussions withpeople, teams, et cetera.
I don't have the time to checkall that stuff that was given to
me.
I can just throw the report inthere or put it on the webpage
or whatever and ask it to justmake sure everything's correct,
everything's calculatedproperly.
So you're essentially using thisas like a barometer of trust.
Mm-hmm.
So like give, give me an initialimpression about whether I
should trust this person's work.
(47:57):
Exactly.
Okay.
And it gives you, like, I don'tthink it's gonna be perfect, but
it's going to give you areas todig in.
and it also gives you a way toget back to people, challenge
them mm-hmm.
And force them to do a betterjob, basically.
Yeah.
Yeah.
Uh, and it's like, again, it'skind of like a bandwidth
extender.
Mm-hmm.
Because like you, if you workwith a team a lot of people work
on a lot of different projectsand it's hard to give each
project the attention you woulddeserve if you wanted to go
(48:19):
really deep into this.
Mm-hmm.
This can shortcut a lot of likethat.
Attention giving.
You can just throw that, readthe, had the red flags, go
through that, and then challengethe person that will then revise
all the work or something.
And that's awesome.
If you have a team, I think thatcan save you a lot, a lot of
time and a lot of mistakes.
A lot of projects failed becausesomeone didn't check someone
(48:39):
else else's work.
and then it's like it didn't panout as expected or nobody knew.
I, I'm thinking in case of likelawyers, you know,'cause they,
they always have their paralegaldo, do the work.
Yeah.
And they're supposed to checkit, but I, I suspect they just
sign it off.
Sometimes I don't, I don't knowexactly.
They, they don't have time toread all that, all the papers
that comes through their desk,et cetera.
Like you can with like betterprompts.
This was like a super basicprompt with like better prompts.
(49:00):
I think it can do 80% of the joblike a paralegal can do.
And probably you keep oneparalegal that operates this
instead of having five,basically.
I'm not even thinking of it asfrom a paralegal replacement
perspective, I'm thinking itmore from like, if you need to
sign something off thatsubordinate has done, it gives
you more trust that that thingis done correctly.
(49:21):
Without you having to read thewhole thing and then, and quite
often you just don't do it.
the reality is it's not checkedproperly.
Yeah.
Because you just don't havetime.
You're busy with another projector something like that, that
will reduce the amount ofmistakes in your company
significantly.
By just.
Running the fact checkers.
So if you're, if you're managinga team or if you're managing
people, then pay attention tothis one.
'cause I think it's quite, yeah,it's so good.
Like, I'm gonna use this a lotactually.
(49:42):
That's gonna be one of my mainuses.
So next one, you've got,analyzing trends and trying to,
trying to establish new trendsin industry.
Yeah, and I'll be honest, thisis one I got very disappointed
in.
Okay.
and it is good.
Like, let's just not hype thetech.
Like, let's just talk about whyit's limited.
Okay.
So basically, I try to take itfrom our point of view.
You guys know us if you'rewatching this podcast.
(50:03):
So I put like, oh, I'm a contentcreator focused on AI for
entrepreneurs and marketerresearch industry growing trends
and topics discuss across,across social media, blogs,
YouTube, Reddit, how can you useProduct Hunt and other relevant
platform to identify emergingSubT trends that are gaining
traction and could inspire highimpact content ideas.
Mm-hmm.
That's pretty much the prompt.
and I told it within the lastthree months as well.
And I gave you some examples.
(50:23):
I was like, example, AI agents,vibe, coding, no, code
automation.
Mm-hmm.
and pretty much all of them,apart from maybe chat g PT gave
me like super, super high leveltopics that we could never use
for content ideas.
like Gemini said, the empoweredsolopreneur AI as a force
(50:44):
multiplier, for example, pretty,pretty boring or like marketing
transformed AI trends for smallbusiness growth.
And it just looks like, oh,automate social media, automate
content creation.
Like, do, do you think that it'sstruggling to see through the
hype and the spin that otherpeople and the sources put on
this?
I think the problem is becauseit goes for like a hundred, 200
pages, et cetera, everythinggets generalized and kind of
(51:06):
like very broadened up andsummarized.
And it just ends up being veryboring.
And you like the specifics, youknow?
to be fair in Gemini, like whenyou actually go through the wall
of text and the very boringstuff, if you actually read it,
they mention specific tools foreach thing.
So they're like, oh, like forproductivity for entrepreneurs,
like Google Workspace now haslike a lot of AI stuff.
(51:26):
there is, like weeks is addingsome stuff.
So actually if you read it, it'sdecent.
But like the high levelheadlines, the structure is,
it's a lot to go through.
the trick is to actually readit.
The, I think it gives like 78tools recommendation inside the
whole report.
So not too bad.
but also, like the problem isbecause it reads the web, it,
there's a lot of old shit inthere.
(51:47):
Like, it, it feels like the webis lagging behind what's
happening if you read webpage,for example.
And blogs.
So like, it talks about GT four,which is like, mm-hmm.
Well, it's like G GT fouritself, like now it's like four
Oh.
Which is different model.
It's like, it, you can feel,it's like it's right old pages.
do you think that could bebecause.
A lot of blog posts.
A lot of articles.
They'll like update the date tomake it feel like it's more
(52:09):
recent.
So that's screwing with the Itsunderstanding of time.
Yeah.
The one that was better, likebasically grok and perplexity
were kind of in the same boat.
Like, you know, grok gave me AIgenerated influencer marketing.
Okay.
Mm-hmm.
AI for voice searchoptimization?
Nah.
Like, no.
Uh, AI in experientialmarketing.
I mean, maybe that's not reallyour stuff.
(52:29):
AI powered neuro marketing.
Like it went like broad andboring.
and then like per complexity aswell was like, yeah.
Interactive, immersive AIexperience.
Multilingual ai, global markettransformation of search.
And SEO, like boring.
Boring.
open AI was better.
like talking about mo code AIplatforms, AI agents and virtual
staff.
Specialized assistant routinetask automation.
(52:50):
Uh, it's like.
It's a little bit too broad.
And the thing is, like, when youread it, it didn't mention like
specific tools as much as Ginidid, et cetera.
So the breakdown of the highlevel topics was better.
But then when you read intothat, it was a bit more generic
and it wasn't as tool focusedand, uh, specific.
So overall, I didn't find any ofthem Very good.
(53:12):
And I think the problem is, islike, it's because it reads so
many sources, it struggles tounderstand like what's really
trending right now.
It reads also the old stuff andjust all blends together.
Mm-hmm.
And, it's, it's annoying'cause Iwould have loved it to do like,
like content planning for us orsomething like this, but I don't
think I would use it for that.
Yeah.
so it's like maybe they'llimprove it, but so far I would
not, I would not recommend forthis.
(53:33):
but one thing that I did betterat was content prep.
So like I, I did for thispodcast, again, very easy for
people to relate to this.
So the prompt was like, Hey, I'mlooking to prep a podcast on
deep research AI tools from Chadgt gr, Gini and Perplexity.
I want you to find practicaluses of these tools for
entrepreneurs and marketer.
Make sure they look into, makesure to look into UDC platforms
(53:53):
like Red Hack can use forums,social, YouTube, Instagram,
Twitter and newsletter platform,Substack, beehive, and so on.
final report should be a list ofusers with interactive,
interesting facts to share.
Yeah, they actually had the factchecking example, for example on
Gemini.
they, like, they had some, it'snot, it's not too bad.
They had brainstorm businessideas and distant customer pain
points, analyze legal documents.
(54:15):
Not too bad actually.
A lot of stuff we've talkedabout, brainstorming video
topics, this came after weactually referred the podcast,
but it's still interestingwriting blog posts and
newsletter.
strategizing newsletter growth.
So like strategy that's not sobad.
Summarizing red threads forsentiment?
Pretty good.
like, yeah, Gerini did.
Okay.
And again, I asked you to make atable at the end.
TR GPT was also interesting.
(54:36):
It did pretty good.
trend scouting, again, not sogood at that, but not a bad idea
to look into.
We wanted to look into that aswell.
Pain point mining.
So finding issues with yourcustomers.
Niche opportunities.
Identification, which is decentsentiment analysis at scale.
Emergency topic D discovery, weknow is not so good at this.
computer mention tracking,feature positioning and
comparison.
So what we did for hfs.
(54:56):
not too bad.
Like for like, when you have onepiece of content, like I would
not use everything but a couplepoints.
I would use that.
And, but I think Gemini andtragedy, g you kind of need to
sift through a lot of the, thenoise there to find one or two
good, good points.
But that, that's, that's usuallythe case with trends anyway.
Right?
And it's like, these are 5,000word report on like GPT, for
example.
It's quite long.
So I mean, if you compare it to,let's say you're using exploding
(55:18):
topics or, one of these liketrend identification tools, 90,
99% of what you find on there isnot relevant to what you're
looking for, but you need to putin a bit of work to find it.
But it's still giving you thedata, the interesting things are
in there somewhere.
There's just not lots of them.
Yeah.
And that's why quite often, likewhen you get like a 5,000 word
response from tep, you justthrow it in there and you're
(55:39):
like, make me a quick list ofthe ideas.
Yeah.
And then you just get it out.
It's so much faster thanactually reading the whole
thing.
but like, yeah, so like forplanning one piece of content,
pretty good for finding trends.
Not so good.
I here would probably just likego to a PFI and scrape Reddit or
scrape social media, findYouTube videos with more than X
likes on your topic and Thatkind of stuff that would
probably be better.
but then once you've identifiedthe topic, then this is quite
(56:02):
good.
I think also if you're usingLLMs to write your articles for
example Then it's probably worthrunning a deep search and
putting that into the context.
if you really wanna do a goodjob, you should clean it up,
remove the stuff you don't likeand keep the stuff that you
like.
But if I was doing AI contentcreation at scale, I would
definitely run this before.
'cause that would make the LMcreate just a better article.
(56:23):
hopefully tools like surfer, etcetera, start implementing this.
surfer has an opportunity to,add perplexity deep research
before the outline, for example,and they make the outline for
you.
But they could use that, throughthe API and it's not too
expensive compared to what theycharge.
Like there's opportunities inexisting tools to make.
Like I, I've always beencritical of like one click, one
(56:44):
click article.
Yeah.
just'cause the quality isn'twhat, you know, but this can
enhance these toolssignificantly.
Not necessarily to the pointwhere it's going to output an
article that will be goodenough.
But it's gonna get closer.
Mm-hmm.
So yeah, that's, uh, like we're,we're about to hit content
commoditization, at the high endreal soon.
Yeah.
We're kind of like going to thenext level.
(57:04):
And that's the thing.
It's like if these tools are notimplementing these, like
building your own automationsthat implement these things,
like, that's how you get acompetitive advantage as well.
Like if you are doing, AIcontent at scale.
It's kind of a no brainer to doa publicity deep search call
before actually.
so yeah, that's pretty much allthe use cases.
So the last use case we've gothere is around basically staying
(57:25):
ahead of everything.
And something which we do asbusiness owners is we need to
keep an eye on new laws across,across the world really, because
even though we're a UK company,some privacy law in California
or some European Union data lawwill, will affect us.
and we need to stay on top ofthose things.
And honestly, like it's reallydifficult, the way we usually do
(57:47):
it is just happen to come acrosssomeone mentioning it in a
WhatsApp group or in a forumwith other business owners.
Even lawyers and accountantssuck at it.
Right, because they don'tunderstand multiple countries.
Exactly.
You know, you speak to youraccount, our accountant in the
UK.
If you wanna know somethingabout the UK he's on, on top of
it, but something which affects,you know, you in Hungary or our
transactions with people in theus.
(58:08):
Like not, not a chance.
so how do you stay on top ofthis?
Well, you can use AI to do it.
Instead of asking an employee,Hey, go and find me all the
relevant laws to our companythat are gonna be coming out
over the next quarter, nextyear.
I imagine larger businessesthey, they, maybe they do.
'cause like, you know, certainindustries like risk management
is very, very important.
(58:28):
but you can stay ahead of thingsby using AI to do this.
So I imagine myself once everysix months or once a quarter
running something like this andjust having a skim through to
see if there's anything elserelevant that we need to pay
attention.
So the prompt I used wasexplaining I run authority
hacker, I'm looking forpotential legal issues that I
should be aware of for mybusiness.
And then I had a list of ourbusiness activities.
(58:50):
So things like selling digitalcourses around the world,
selling sponsorship on YouTube,on our podcast, on our email
list.
Collecting emails.
Sending emails.
Reviewing products and software,inviting guests onto our
podcast, making industry newsfor YouTube, hiring employees in
a bunch of different countries.
And, explained, you know, wherewe are from, where we live,
these things.
(59:10):
and I just said research.
What else we do?
And figure out any upcominglegal issues around the world
that we need to be aware of inthe coming year.
So we in trouble?
No, Not really.
Okay.
So first things first, grok, Ithink the output was very
useful, very readable.
so if you want just a quicksnapshot of the most important
(59:31):
things.
That was good.
The problem with the lower tiermodel, so grok and perplexity is
they didn't go far enough.
Yep.
Right?
So they didn't, I think,research authority hacker enough
to figure out we have customersin 140 countries or whatever it
is.
and so it really just narrowedits focus.
It just looked at UK and some EUlaws.
(59:51):
almost no US laws.
Okay.
Which was a bit disappointing.
Um, that's kind of gr alwaysright?
It's like it's really nice toread.
Perplexity was similar.
It was too focused.
It's nice to read, but it's notreally deep, deep search.
There's like deep search anddeep, deep search, you know,
it's like, it's two levels.
It's like medium deep and likevery fucking deep search.
Yeah.
anyway, Gemini and.
(01:00:14):
chat.
GPT did a much better job ofthis.
and they were able to find allthe relevant US laws, new
privacy laws, even acrossdifferent states.
and that gave me much more to,to kind of focus on there.
The problem again with Google,with Gemini was the output was
like, how to read that.
Oh God.
So the first paragraph, I'm noteven kidding.
(01:00:34):
There must be 15 lines of textin, in here.
They're so close.
I just don't know why theycan't, like they do all the
breakdown and break it down intoparagraphs that you can read it.
It drives me crazy because theydo, the hardest part is the
research and all of that, right?
It's like, and it's good.
Like it's pretty good.
The reasoning is a bit undertragedy, but the, the, the
research is very good.
(01:00:55):
And then they just output thisand it's like, it's like I was
re-prompt it in like, and justto, to make it readable.
It is just so annoying.
And slightly side note, I, Ithink that there's a, a cultural
or a, like a, a cultural issue.
Uh, Google and the way they'reapproaching this,'cause they're
building these tools for likenerdy research scientists and AI
tech, tech bros.
(01:01:17):
Um, one of the things we weretalking about earlier, and we,
we disagreed on this, is I hatethe fact that Gemini, by
default, it's got a blackbackground.
Like it makes me think like, youknow, you, you, you use
everything in dark mode, soyou're probably like used to it,
but it's just, it's really offputting.
It's like, oh, this, this isn'twhere I normally hang out.
This is, this is the coders use.
(01:01:37):
The, the paid plan is dark mode.
Mm-hmm.
And the free plan is light mode.
So, really, and that's because,and that's Apple who started
that because like if you buy aMacBook Air, for example, the
sales page is like bright andand white.
If you buy a MacBook Pro, it's,it is dark and all that's kinda,
so it's just to differentiateit.
Yeah.
So like in general, the co thecolor code intake is like pro,
(01:01:58):
it calls dark and consumer itcalls light.
Maybe it's a combination of thatand the, the lack of paragraphs
and make it difficult to readthat.
It just, it, it, it feels likeit's out of reach to, and I'm
telling you, so many people arenot gonna use this tool Yeah.
Because of that.
Exactly.
It's, it's excellent.
Not'cause the dark, but becauseof the output, formatting.
(01:02:18):
And it's like, I'm not sureGoogle wants you to use it too
much'cause it's less monetizedthan actual Google.
Mm-hmm.
so they might just be like, Heylook, we can do this.
they want to get market share atall costs at the moment.
they'll do whatever it takes,but you need to take this and
then.
Put it somewhere else andsummarize this, what's relevant?
But you get unlimited use versus10 users per month, right?
Yeah.
It's like for the same price.
and, and you know, to be fair toGoogle, they do at the very end,
(01:02:41):
they do have this actionablerecommendations for authority
hacker section.
now it, it's okay in there.
so some of the things are, forexample, co conduct a
comprehensive data privacy auditconsidering GDPR UK data laws,
things like that.
That's not really a new thingthough.
Yeah.
that's, I mean, I guess itdoesn't know what we've done in
the past, so that's fair enough.
(01:03:01):
But how was Strategy d PT incomparison?
Judge D was much better in termsof presenting.
It's actually really nice to, toread almost on the grok level
of, of readability.
I will say.
it went way overboard onbolding.
I would say 50% of the text hereis bolded and in turn would have
done that too though, whenever,fair enough.
(01:03:22):
When everything is bolded andnothing might as well be bolded,
it just makes it harder to read.
But it has broken down the keylegislation and some, you know,
there's legislative areas.
Is there anything you're gonnaact on based on this?
So, there was one, which wasaround, selling courses and
things with payment plans.
Not as much of a problemanymore.
Well, we don't sell coursesanymore.
(01:03:43):
but you know, if we werecontinuing to do that, then
there was some stuff there thatwe have to do.
Did they all surface it or whosurfaced that?
No.
So these were, Chad, GBT and,Gemini did.
Okay.
Yeah.
So the ones that you wouldactually have acted on are only
these two and you would not haveacted on perplexity?
Yeah, I mean, look, I didn't gothrough all of them and like fi
figure out, but the.
(01:04:04):
I would, I would be goingthrough chat GPT, on this.
Just'cause the output's nicer.
Yeah.
And, and again, like, it's likewith your, trends issue.
It, there's a lot of noise inhere, so I need to sift through
and I still need to think, oh,I've, I've dealt with that, I've
dealt with that.
I know about that.
What don't I know about?
And you can't ask ai what Idon't know about.
so one thing that I found thatis really handy is just take the
(01:04:24):
output and restate your goal andput that into code 3.77
thinking.
And it's like, it does a prettygood job at like extracting what
you need in a digestible way.
So it's like the kind of likemost cost effective deep search
for me.
I'll, after doing a lot ofthese, as you guys can see, is
that is just run Gemini becauseI'm not gonna run out of credit
all the time.
(01:04:45):
Yeah.
For most of it.
And then try to run through 3.7sonet, which you can use for
free on cloud.com.
and restate your goal.
It's like, I want to do these,extract what's useful for me in
a nice, presentable way.
And do, do not think though,there's a problem if it tries to
figure out what's useful foryou.
It's not necessarily gonna knowall of those things.
Right.
It won't know which laws.
(01:05:05):
I know it.
Yeah.
But like I connected to my brainyet I understand it.
But like, if we'll reformat it,we'll remove the noise.
It will do all of that, and it'sgonna be so much easier to go
through this.
And what I would probably do isthen go back to the original
research when I found a sectionthat is interesting and actually
read the original.
Mm-hmm.
so like, my only fear with thatis it might like miss out, like
it cut out some stuff that, thatthe more you summarize, the
(01:05:26):
more, the more that can happen.
But also are you gonna read theentire result?
Maybe not.
so it's like that's, that's kindof like the, the deep search for
everyone that is good enough andunlimited.
Mm-hmm.
Is this for me?
because tragedy, beauty is thebest, but it's basically very
expensive.
To use.
And I expect most people willnot like they will have 10
(01:05:47):
credits.
Most people.
Yeah.
and look, you know, we're stillin very early days of this.
I expect all of the major AIplatforms and models to have
deep research or similarfunctionality and some of them
to even push it, push itfurther.
So API usage and APIavailability, I can only see it.
But I think for the prospecting,we can already do it.
I think we will implement thatactually.
(01:06:07):
And I think for the factchecking, I'm gonna implement
that, and then for the contentprep as well, I'm gonna
implement that.
Mm-hmm.
and yeah, background check forsponsors as well.
We definitely gonna do that.
So it's like, yeah.
Can you even connect it to the,the form, you know, when they,
when they fill it in?
Yeah.
Oh, for sure.
For sure.
I can do that.
and it's like we can just haveit all in notion, and I would
probably run the output throughcloud to make it nice.
(01:06:28):
you just have something that'susable, inside your workspace.
Yep.
What's your feeling like?
You didn't use much deepresearch before this podcast.
Mm-hmm.
I used it a bit more, but likeI, it made me use all these
tools equally, which wasinteresting.
What's your takeaway?
So, in my mind, this is like, weinitially had AI models that
were, you know, trained atcertain day and they had no
fresh information.
(01:06:49):
And then we had ai, which wasconnected to search, which could
find new information.
And, you know, it was basic, butit could do it.
It's AI search, but It's AIconnected to search, on like the
hugest steroids in the world.
it really is like having anassistant go and you just give
it high level instruction and itwill go and figure out what to
(01:07:12):
do and do it.
And it's the combination offiguring out what to do and do
it.
That's so powerful.
I think we're still in earlystages of this and, you know,
we've seen, there's challengesaround formatting and getting
exactly what you want, but likeit's so much possibility with
this.
Oh my god.
it's quite scary for theprospecting stuff.
Like, wow.
I mean, this is what agentic AIis supposed to be like.
(01:07:36):
Yeah.
And like, you know, now we're,they're working on the browser
user ones.
Like we might, that that is Ithink the next level, like'cause
on the Yeah.
Walled garden.
Issues are a problem here.
You know, it can't go browseLinkedIn and the browser will
solve, go browse Facebook.
And if you, if it can act as youand it's not mm-hmm.
Getting blocked mm-hmm.
You know, Google bot or Gemini,getting blocked, at a, a site
(01:07:59):
level, then it's like, yeah,we'll be doing many more
podcasts.
Do it will do the thing for youas well.
It will not just research.
Like right now it's justinformation you have to do
something with it.
Yeah.
Eventually we'll go and do thething.
It'll be like, oh, then based onthis research, I implemented a
new way to, like, I built theautomation on your form to like
background research, thesponsors, et cetera.
I logged in and I just like madethe automation make.com and it's
(01:08:22):
just gonna do it.
Right.
Like that's where it's going.
Yeah.
It's like, I think, deepresearch is something a lot of
people get excited about andthen they realize it was work
and then they didn't use itbecause you prompt it
differently.
Like people kind of like.
Think search that's in keywords,right?
Still.
Mm-hmm.
And so you cannot do a, like asimple search, don't use deep
research.
It's useless.
You're wasting your time.
It's slow.
but like, if you have higherlevel goals, if you have high
(01:08:44):
level decisions to make,important decisions to make
people you're going to interactwith, like that's going to be
handy.
Like, so, like the backgroundcheck, we were talking about
this, right?
If you go to like a conferenceor an event or mastermind or
something You can run all theattendees through this, figure
out who you need to speak to andwhat, and it's like, talk to
these four people and that's it,it's gonna help your business.
Like they're interested in whatyou do.
They, they're looking to forpartners, they're partnered with
(01:09:05):
these other companies, blah,blah, blah.
They're interested in this.
Like, you come with like, oh,like I conveniently had the copy
of this book that you might bevery interested in.
Just take it.
It's fine.
I just finished it.
you know, like, you can be a bitcreepy on this.
Yeah.
It's a bit like stalkery Yeah.
Level.
But yeah, that's what it takes,I guess.
Yeah.
Well wait until it's connectedto your glasses.
but yeah, so like there's lot ofopportunities.
The people who will use this.
(01:09:26):
Yeah, we'll have a competitiveadvantage.
but as everything, there's a bitof a learning curve and it's not
perfect.
So we try to portray that inthis podcast.
I hope that was, useful for youguys.
we're gonna use more deepsearch.
That's it.
I would like to hear from theaudience.
I mean, what they think and howthey think they might use deep
research, in their business aswell.
So, head on over to our YouTubechannel and leave us a comment
there.
(01:09:46):
we do read and interact with allof them, so, we look forward to
talking with you guys there.
and we'll see you guys in twoweeks for another episode of
the, authority Hacker Podcast.
So don't forget to subscribe soyou don't miss that.