Episode Transcript
Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Speaker 1 (00:00):
ChatGPT has over half of the content coming from
the last year. And in fact, if you look at
what is the single most likely day to have content
published from? If I were to run a query today
and I were to look at all the links that
it cites, or I were to run one hundred queries
today and look at all the links that it cites,
(00:22):
if you were a betting man, the most likely publication
date you would see in any of those contents is yesterday.
The single day before is always the most likely
day that you're going to see content from. To your point,
it's a huge kind of wake-up call. We have
to make sure we have a steady stream of news
and press out there about our brand, because if it's not recent,
it may not be seen.
Speaker 2 (00:52):
Welcome to Building Brand Gravity.
Speaker 3 (00:53):
I'm Steve Halsey, and I'm Anne Green, here at G&S
Integrated Marketing Communications Group. So glad you joined us today.
Speaker 2 (01:01):
We've got a great episode for us today. I'm kind
of joking these days that in the sixties you had
the Summer of Love, but here in twenty twenty five
we have the Summer of the large language models. And
this has been one of the biggest shifts really facing
communications and marketing teams right now, this rise of generative
(01:24):
engine optimization or GEO, which is the acronym that you
see out there, but it really isn't just a theoretical trend.
I mean it's happening right now in real time every
time you pick up your smartphone, every time you do
a search on your computer, and it's fundamentally changing how
brand visibility works.
Speaker 3 (01:43):
Yeah. I couldn't agree more, Steve. I mean we've moved
from a world where people are very focused on optimizing
for search, you know, search engines with links and keywords,
and now they have to understand how that is changing
with AI models like ChatGPT or Claude or Gemini, how
they're searching and discovering, summarizing your brand story, deciding which
content gets amplified. As our recent primer on generative engine
(02:07):
optimization or GEO put it, LLMs are the new gatekeepers.
I'd also refer our listeners back to an episode that
just went up, you know, and is being promoted, with Dan Nessel.
The focus was on earned attention, not just earned media,
and I think Dan said it really well, which is
you have to be thinking about AI, especially the large
(02:29):
language models, as another stakeholder.
Speaker 2 (02:32):
Yeah, and I think that's a very important
insight in that, you know, where a lot of us started,
traditional PR gave us that credibility through trusted journalists, the
earned coverage; all of those things really bolstered brands.
Now AI itself is acting like a little bit of
a reputation system where it's scanning for credible, structured recent
(02:55):
content and then it's shaping answers from it. So you know,
what's interesting to me is how it's evolving. SEO used
to be about, hey, let's get in the top
ten links. Now it's really about this intersection of earned media,
credibility and digital market precision.
Speaker 3 (03:13):
Yeah, it's so true. It's really that zero click search
environment that is quite impactful for publishers especially. But if
you want to see the data behind this, everyone needs
to check out Muck Rack's new study, What Is AI Reading?
They analyzed over a million citations from top LLMs and
found that eighty-nine percent of them come from earned media.
(03:35):
Their definition of earned media is interesting. I really encourage
people to check out that pie chart and dig into it.
There's a lot of good data in there. But the
takeaway is that credibility, and those measures of authority, still rule,
but the algorithms from the LLMs are really now in
charge of surfacing them. And that's material for many of
(03:56):
the folks in our field and really any organization out there.
Speaker 2 (03:59):
As you mentioned, we're really moving into this age of a zero
click environment. That's really why, as you know, we're urging
our clients to really rethink visibility in terms of an
answer-first context, right. And so the GEO playbook that
we put together really lays out what I would call
(04:19):
kind of four non-negotiables. The first is recency. AI really
does favor content from the last twelve months, in some
cases from the last day, which really requires you to
think about your content strategy, or even your crisis and
issues strategy, very differently. Then there's structure. Structure is
still important: having those digital experts who understand semantic HTML,
(04:42):
how to do the schema markup behind the scenes, but
even how to structure the data with FAQs, is key
for how the AI starts to parse that. So recency,
structure, and then you get the two C's, consistency and credibility. Right.
Consistency is you've got to tell your story across all
different media, across earned, owned, social, paid; they all need
(05:06):
to be aligned because that's always getting pulled together. And
then at the end of the day, with credibility you're seeing
a little bit of, I'll call it a renaissance, though
it never went away: journalistic validation still carries
as much, if not more, weight than it ever did.
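For readers who want to see what that schema markup piece can look like, here is a minimal sketch of FAQ structured data using the schema.org FAQPage type, generated with Python purely for illustration; the question and answer text are hypothetical, and a real page would embed the JSON-LD output in a script tag of type "application/ld+json".

    import json

    # Minimal schema.org FAQPage structured data (illustrative only).
    # A site embeds the JSON-LD output in a script tag of type
    # "application/ld+json"; the Q&A text below is hypothetical.
    faq_schema = {
        "@context": "https://schema.org",
        "@type": "FAQPage",
        "mainEntity": [{
            "@type": "Question",
            "name": "What does your company do?",
            "acceptedAnswer": {
                "@type": "Answer",
                "text": ("Answer-first copy goes here: one clear, direct "
                         "statement, then supporting detail."),
            },
        }],
    }
    print(json.dumps(faq_schema, indent=2))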
Speaker 3 (05:22):
And that's so powerful with journalism under such pressure, and
we've seen that now just increasing year after year, so
quite an interesting moment, especially for trade publications as you said,
that are in more specialized spaces, but we can't forget
social media as well. So it's not just about engagement anymore.
It's also about feeding the large language models. So social posts,
influencer content, you know, comment threads are really starting to
(05:45):
act as relevance signals. I think that's a great way
to think about it. There's been a lot of discussion
about the power of platforms like Reddit. You know, these
things are really critical, and if your message isn't reinforced,
or you're not engaging on those platforms, or you're at
the very least not aware of the ways in which the
topics that are relevant to your organization, or your
organization itself, are being discussed, you know,
(06:08):
you may find the content that matters to you is
filtered out or just not on the radar at all.
Speaker 2 (06:12):
And I love that context, relevance signals. And we're gonna
be joined today by our guest Matt Dugan. He is
the head of data at Muck Rack. He's going to talk
about this latest research that they did where they looked
at more than a million queries, by far the most
in-depth research that has been made public, and they're
giving us the opportunity to get the first run at
(06:34):
that data. So excited to be talking to Matt. And
we have our own Lauren King with us, who's one
of our AI and GEO strategists here at G&S, to
really bring that context of what does this all mean
from the client perspective in the industries we serve.
Speaker 3 (06:50):
Yeah. Together, they're going to help us break down the
different models, ChatGPT versus Claude versus Gemini, how they
evaluate sources, how comms and content gets cited, and truly,
in terms of where the rubber meets the road, what folks in
our sector in integrated marketing communications, no matter where you
sit in those fields, can do to make sure your
(07:11):
brand shows up when AI generates the answer. And I
can say that this is a hot, hot topic of
conversation across everyone we're working with, in our walls and also
within our client walls.
Speaker 2 (07:24):
Well, I'm excited to get into the conversation. So let's
get going today. We have a very exciting and timely
topic to talk through: What AI Is Reading, the new
rules of earned media and GEO. What if I told
you that ninety-five percent of what AI cites isn't
(07:44):
paid and that journalism might just be the most valuable
digital real estate in your strategy. Today we're going to
break down some exciting new research on how large language
models decide what to say and who to cite. So
with me today is Matt Dugan, head of data for Muckrack. Matt,
welcome to the show.
Speaker 1 (08:02):
Yep, thanks for having me. Super glad to talk about
this important, exciting topic. Yeah, very happy to be here.
Speaker 2 (08:09):
And we also have Lauren King, Digital Marketing Supervisor, AI
and Insights, at Morgan Myers, a G&S agency. Lauren, welcome
to the show.
Speaker 4 (08:17):
Thanks for having me back. I think AI is on
top of all our brains right now.
Speaker 1 (08:20):
Yeah.
Speaker 2 (08:20):
You know, in the sixties they had the summer of love.
You know, this year seems to be the summer of
the large language models. There we go, kept it, kept
it in the L's. So we're here to unpack Muck Rack's
landmark survey and analysis of over one million citations from ChatGPT, Claude,
and Gemini and what it tells us about brand visibility,
(08:43):
GEO and earned media. Matt, I guess we'll start with you.
There's so much data to unpack. What are some
of the highlights?
Speaker 1 (08:50):
Yeah, well you touched on it in the intro, absolutely,
but I think the highlights are that it's a great
time to be in PR. It's a great time to
be in communications. It's no coincidence that the things that
PR pros have known for a long time, that credibility matters,
you know, recency matters, those same things are important to
(09:15):
the AI. Of course, we'll dive into it a little
bit further, but AI is really relying on credible, recent
data, and we can see that plain as day in
our analysis that we did.
Speaker 2 (09:28):
Now, when you did your analysis, I want to spend
just a little bit of time up front talking to that.
So, more than one million links, and I thought it
was interesting, when you look through, you assigned a number
of categories. So the categories were journalistic, which were news sites,
journalist-led coverage; corporate blogs and content, owned media; press releases;
academic and research; government and NGO; paid advertorial; social, user
(09:52):
generated content; aggregators; and encyclopedia. So you had a
really nice breakdown of that. Why don't you tell us a
little bit about what led you to really
run this study? And then how did
that breakdown really inform the really interesting results that
you guys found?
Speaker 1 (10:10):
Yeah, of course. So you know, at Muck Rack, of course
we deal with PR pros every day, and this
kind of question of, you know, what should I
be doing to impact
these AIs has started to come up. Obviously getting your
(10:30):
word out in the media, and then, you know, even
more recently social media. I think, you know, in general,
we find that our customers at Muck Rack have paths for
that, so it's sort of a well understood path.
But there's this new thing. Oh my gosh, all of
a sudden, these robots are talking about my brand. I
have no idea what they're saying, I have no idea where
they're getting it from. And so we just kind of
(10:53):
wanted to work that backwards. We wanted to say, yeah, okay,
if we're going to advocate for the PR professional, if
we're going to advocate for here's how you can get
your message out there in the AI. The question just becomes, well,
what are they reading? So that's exactly why we titled
the study What Is AI Reading? I mean, that's the
exact question that we had. Of course, you know, somewhat
(11:15):
self-servingly, we want to help PR pros focus on
the areas that matter. So we did a tiny, tiny
version of this study first to just see, you know,
what are the types of stuff that show up there: Reddit,
LinkedIn, you know, Reuters, financial outlets, to just kind of catch
a quick glimpse of what's out there. And then that's
(11:35):
how we decided we need these categories. Oh holy smokes,
the different types of prompts that you ask it matter.
So once we had seen just a little snippet of
the data, we kind of created a more rigorous analysis
that we wanted to do. Like you said, a million times.
Speaker 2 (11:50):
And so, Lauren, from where you sit, is it good?
Is it bad? Should we be concerned that the large
language models are thinking more and more like humans in
terms of how they search and process data? Well,
we'll get into numbers here in a little bit,
but what's your take about just the speed of the
evolution and the real, I guess I would call it,
natural language processing that is happening with the large
(12:13):
language models?
Speaker 4 (12:14):
Yeah, so there's a few caveats in there. It's positive
on my side, but it also takes some rethinking on
the benefits that you're going to get and even how
your users are interacting with your PR. But starting at
the top, we're basically seeing, like Matt was talking about,
this validation of trust and authority in a way that
hadn't really been able to be showcased as easily in
the past. So if you think about a large language
(12:36):
model that's behind something like ChatGPT, it does two things
when it's searching: it references the knowledge it was
trained on, and then it reviews what's out there for today.
So if you're asking for a recent topic on crop protection,
it's going to go through recent stories, but it's going
to compare those to what it already knows. So if
you have a strong basis in PR, you've been putting
(12:56):
things out for years, You've got this trust and authority
already built in. Then you're going to be compared to
what's out there and you're more likely to show up
in these results over time. So beneficial, but a completely
different approach you have to consider. Also keeping in mind
that your users are less likely to go directly to
your site now and they're more likely to ask a
complex query, maybe comparing as a company your PR results
(13:17):
to a competitor and seeing what comes out. So you
have to reframe everything entirely, change the way the language
you're using is positioned, changing and incorporating SEO in an
even stronger but slightly different approach than before, and having
a good basis in how these models just select stories
in the first place.
Speaker 1 (13:34):
That's great, and actually I want to
add just one short thought on top of what
you said. I love that you called out the fact
that there's kind of two approaches that these models take.
They reference their training data and then they'll look at what,
you know, what sort of recent news is out there today,
as you call it. I really want to underline that
(13:55):
because what I found, you know, since we launched that study,
that second piece. I think a lot of people don't
even realize that second piece, because when ChatGPT first
came out, you know, it feels like it's been here for
a while now, but I guess it was only three years,
three years or so, ago. Everyone had that experience. I mean
(14:15):
everyone is used to that thing where it said like, ooh,
I can't comment on that because my training data only
goes up to you know, blah blah blah year. But like,
just to be very clear, like what Lauren and I
are saying here is it doesn't really work like that anymore.
They got burned by that. Obviously, they're a business,
they're trying to make money, they're trying to be credible,
and so what they said is, wow, we need a
(14:37):
way around this. So they now have this sort of
two-pronged approach. Yes, they rely on their training data
and it's useful and as you said, if you've had
years of positive coverage, you've got a nice foundation. But
in addition to that, it is surfing the web in
real time after you make your query. And that's kind
of the crux of this study that we did here
(14:57):
at Muck Rack, what we're calling these citations.
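To make that two-pronged flow concrete, here is a minimal conceptual sketch; ask_llm and search_web are hypothetical stand-ins rather than any vendor's actual API, and real systems interleave these steps in more sophisticated ways.

    # Conceptual sketch of the two-pronged approach described above:
    # parametric training-data knowledge plus real-time web retrieval.
    def answer_with_citations(query, ask_llm, search_web):
        # Prong 1: what the model already "knows" from its training data.
        draft = ask_llm(f"From your existing knowledge, answer: {query}")
        # Prong 2: surf the web in real time, after the query is made,
        # favoring recent sources.
        sources = search_web(query, max_age_days=365)
        # Blend the two, citing the retrieved links in the final answer.
        return ask_llm(
            "Revise this draft using the sources below and cite them.\n"
            f"Draft: {draft}\nSources: {sources}"
        )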
Speaker 2 (15:00):
Well, and what I think is fascinating is, and of
course we all had that experience, I had the same
thing too, and I'm like, oh my gosh, when I
first started experimenting, this is great, and it's like, yeah,
but we can only get you data from two to
three years ago, which is of only so much value.
But look what it can do now, particularly with the recency
and the sourcing. Here are some really cool stats from this.
(15:21):
Like I said at the opening of the show, ninety
five percent of the links that are cited by AI
in this study are from non-paid sources, and so
we gave you what those categories are. Eighty-nine percent
of the citations come from PR-driven sources, so that's journalism,
the blogs, different things like that. And then, to me,
(15:42):
where it really gets interesting is when you overlay the
lens of recency: nearly fifty percent of the citations are
journalistic in nature. I saw that
stat and it was pretty revolutionary to me, Matt. So
can you just talk a little bit about
(16:02):
those stats and what that means, and how
we should think about comms in light of that?
Speaker 1 (16:09):
Yes, and so I love that that one was eye
opening for you. I guess it can open your eyes
even further, raise your eyebrows even further, because it's
about fifty percent of the links that are cited.
if you think about it, each time you're asking an
AI one of these questions, it's actually citing multiple links.
So odds are I don't have the exact number, but
(16:30):
it's something, you know, like seventy percent or eighty percent.
If you're asking a time sensitive question or an opinion question,
something about recency, more likely than not, you know, seventy,
eighty percent, at least one of those links, at
least one of the multiple ones, will be, you know,
journalistic, sort of major media outlet or niche media outlet
type of links. So yes, absolutely fifty percent overall,
(16:54):
but also most of the queries, the vast majority of
the queries are going to come back with at least
one of those And so, I mean, you know, to
your point, it's just a very important piece of that pie.
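A quick back-of-the-envelope sketch shows why the per-query figure runs higher than the per-link figure. Purely for illustration, assume each cited link has an independent fifty percent chance of being journalistic, an assumption the study itself does not make:

    # Illustrative only: chance that at least one of k cited links is
    # journalistic, if each link independently has probability p.
    p = 0.5  # ~50% of all cited links are journalistic (per the study)
    for k in (1, 2, 3, 4):  # number of links a single answer cites
        at_least_one = 1 - (1 - p) ** k
        print(f"{k} links -> {at_least_one:.0%} chance of a journalistic link")
    # Two or three links per answer already lands in the 75-88% range,
    # the same ballpark as the 70-80% figure mentioned above.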
Speaker 2 (17:06):
So, Lauren, what does this mean for
the enduring value of media? And see, I grew up
where, you know, my first job was, you know, I
had a list of reporters to get
on the phone and get your client in Newsweek, get
your client in BusinessWeek, get them on the front
page of the Chicago Tribune. That really shaped a lot
(17:28):
of my approach to storytelling and what you needed to
do and how you prove relevance to media. Now
we're seeing that, you know, the enduring value of media
continues to come through as part of the comms mix.
I mean, what's your take on that
from somebody whose background is more digital in nature versus
(17:49):
somebody like me who really grew up in the process
of working with and proving value to media on a
daily basis.
Speaker 4 (17:56):
Yeah. So the first thing that comes to mind out
of that is, as I was viewing the data as well, what
was really jumping out at me is how different some
of the sources were, some of the companies that were
being pulled from, and the niche news organizations. And so
you've got to rethink where you're trying to get
yourself showcased, essentially, and make sure those are actually the
places that, when your audience is searching, are the
(18:17):
sources that are going to show up for them. So
there's a real customization aspect here that probably involves doing
a deeper dive into your audience's digital preferences. Are they
using Claude, are they using ChatGPT, what's their preferred
AI tool? And a lot of that's still emerging. I mean,
this is still very very new for the majority of
consumers and for those who are adapting and trying these
(18:38):
different tools. So there's not going to be full answers yet.
It's going to take some experimentation. But once you've got
a grasp on that, that gives you a better handle
on the targeting you should be doing for those stories,
because you really do want to specialize, you know. I
know that TechRadar is one that comes up quite
a bit across a lot of different LLMs, and so
from a technology background, if they weren't in consideration before,
(18:59):
they're now basically going to be preferred when it comes
to any kind of technology product as long as the
review is showing up well.
Speaker 2 (19:05):
One of the things I think is interesting, then, and
I hadn't really thought of it until you mentioned it, Lauren,
is, you know, I think a lot of us by nature
default to the tool that we use. So if you
use ChatGPT all the time, you default to ChatGPT. If
you're using Gemini, similar thing, Claude. But what you bring
up is a really interesting point that when we're putting
(19:25):
together programs as communicators or as agency professionals, as we're
counseling clients, we really need to think about large
language models not as this ubiquitous thing. But I guess,
you know, Matt, kind of from this, you're basically saying
they each have their own little bit of a
style that needs to be taken into account.
Speaker 1 (19:45):
Yeah, they absolutely do. I mean, I think even a
couple of times I've been chatting with folks, you know,
and I've even called it like a personality. They do.
They behave differently, of course, in the way that
they talk, to use that word, but, like, in
the way that they, you know, create sentences, but also
(20:05):
in the content that they read, which is, I think,
what's most important here. In fact, we actually see that
even just the quantity of content that they read
and the quantity of content that they cite, they'll even
have different patterns. Some of the tools will cite articles
that sort of map to their entire response.
(20:29):
Some of the tools will just cite one article for
like one sentence at a time. And this is something
that's changing over time too. To add another
dimension to it, these different tools are maturing and
growing and changing. So I definitely agree with Lauren's kind
of overall point, which is, like the best way that
(20:50):
you can sort of approach this is to just do
a little bit of reflection on who's your audience,
how are they thinking about that, what tools are
they using, and just kind of optimize for that.
Speaker 2 (21:00):
So Lauren, my question for you is, you know, you
counsel a lot on what I'll call niche audiences,
and I hate to call them niche because they're pretty big: agriculture,
advanced manufacturing, highly complex B2B supply-chain-based companies.
How does that kind of niche focus play out in
(21:24):
what communicators need to think about when they're not
going for TechRadar, when they're not going for
Reuters or Bloomberg?
Speaker 4 (21:31):
Well, it does go back to understanding consumer preference or
audience preference. You know, if you take my dad, for example,
he's a corn and soybean farmer in Michigan, most of
his interactions with the Internet are on his phone. He's
not using a desktop ninety nine percent of the time.
He's going to be using his phone looking up very
technical information around crops, around crop protection tools, around markets
(21:53):
for the day, and so his behavior as a farmer
is going to be dictated by that. It's also going
to be dictated by the type of phone he has.
You know, there's been some reporting recently that Apple wants
to maybe look at using Google Gemini as the background
for Siri. So if you're just making the assumption that
your iPhone users are going to use a certain app,
you need to be aware that their behavior is actually
(22:13):
going to be dictated partially by a Google-based AI,
which is going to show completely different links than if
it was OpenAI that had built that partnership
around Siri. So it's multifaceted. If you were to go
more towards our veterinarians or some of our other technical
industry specialists, there's a chance that they're going to be
more on desktop and they're going to be choosing the
(22:34):
type of tool they might use. They might move more
towards Perplexity because of that deep research need. In that case,
you're also going to have to consider a completely different
platform for sourcing, and so deep audience knowledge is very,
very important. There's room here to help using AI, you know,
just even going into AI tools and asking those types
of questions to simulate where maybe a veterinarian might look
(22:56):
on their own. But you have to think very omnichannel
and you have to think with constant audience preferences
in mind.
Speaker 2 (23:03):
So let's talk a little bit about recency. And this
is the part where I remind our listeners
that I'm actually a dinosaur, in that, like, when I started
this industry, the process was I would call up a
reporter or an editor and say, hey, I got something
really cool. I'm going to be sending it to you,
and then I would put it in this thing called
the US mail, so I'd have to wait three to
(23:25):
five days. Then I would follow up and say, hey,
did you get that thing I sent you? And then
let's talk about it, facilitate the process of interviews, cover
all that and so typically the lead times we were
working with if we're dealing with a trade publication, was
about three months from when we started the pitch to
where the story was placed. Obviously, if you're dealing with
(23:46):
TV radio or daily news, the cycle was a lot
more compressed, but it had a certain cycle to it right,
and then as the twenty-four-hour news cycle shrunk
to by the hour, to by the minute, to now
by the second, you know, you saw a lot of
things like really revolutionary moments, like in the Arab Spring,
(24:07):
where you saw reporting starting from smartphones, off of social
media sites, that then led to feeds, that then led
to breaking reporting, then led to longer-form reporting. I
guess it's a long way of saying the speed of
this gets faster. So I guess, Matt, what's going to
be interesting to me is recency rules right now. So
(24:30):
when we were talking earlier, you were saying, like, OpenAI
favors articles from around the past twelve months, Claude
may go a little bit deeper. So I want you
to talk a little bit about that. But do you
think we're going to enter the same cycle of speed
that's happened in other parts of media consumption, getting
translated into how the large language models start pulling their information?
Speaker 1 (24:55):
Yeah, it's a good question. So yeah, I'll start with
the first half of your question, and then I'll
touch on kind of the...
Speaker 2 (25:02):
Part where I'm a dinosaur.
Speaker 1 (25:04):
Yeah, I'm going to talk more about US mail if
we can. First, I'll touch on, you know, what
we saw in the study, and then I'll kind of,
you know, maybe make a little bit of a guess as
to where this is going. But absolutely, what we saw
in the study is that when these sources, excuse me,
when these AI systems are citing sources with dates associated,
(25:27):
you know, a Wikipedia page, for example, doesn't really
have a publication date
in the same way that a news article or
even a LinkedIn post or YouTube video does. I mean,
those have clear publication dates. So for any of the
type of content where we're able to discern a clear
publication date, it's very clear that stuff in the last
(25:48):
twelve months gets cited more often than stuff you know,
four years ago, five years ago. And again this is intuitive,
very clearly, the AI systems have been built and designed
to favor recency. Now what's particularly interesting is that that,
(26:10):
you know, how could I how could I call that
amplification of the stuff from the last twelve months over
stuff from four or five months ago is even stronger
in chateapt than it is the other tools. Actually, chat
gept has over half of the content coming from the
last year, and in fact if you look at what
(26:30):
is the single most likely day to have content published from? So,
like example, if you know, if I were to run
a query today and I were to look at all
the links that it cites, or I were to run
one hundred queries today and look at all the links
that it cites, if you were a betting man, the
most likely publication date you would see in any of
(26:52):
those contents is yesterday. It's always the single day before
is the most likely day that you're going to see content,
which I think, to your point is a huge kind
of wake-up call of, like, wow, we've got to
make sure, easier said than done, we have to make
sure we have a steady stream of news and press out
(27:12):
there about our brand, because if it's not recent, it
may not be seen.
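As a rough sketch of how you could check that recency skew yourself, assuming you have already collected the publication dates of the links cited across a batch of queries (the collection step is not shown, and the sample dates are made up):

    from collections import Counter
    from datetime import date

    # Hypothetical input: publication dates of every link cited across
    # a batch of queries run on one day (sample values are made up).
    cited_dates = [date(2025, 8, 14), date(2025, 8, 14), date(2025, 8, 13),
                   date(2025, 8, 1), date(2023, 6, 2)]

    # The modal publication date; per the discussion above, expect
    # "yesterday" to come out on top.
    modal_day, n = Counter(cited_dates).most_common(1)[0]
    print(f"Most common publication date: {modal_day} ({n} citations)")

    # Share of citations published in the last twelve months.
    today = date(2025, 8, 15)  # assumed "today" for this example
    recent = sum(1 for d in cited_dates if (today - d).days <= 365)
    print(f"From the last 12 months: {recent / len(cited_dates):.0%}")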
Speaker 2 (27:16):
That is amazing, that the recency is really the day before,
and the implications of that are pretty significant.
And Lauren, so how do you
process that, in, like, the difference between dealing with
kind of breaking, trending things versus maybe some more
(27:37):
cyclical industries where you're not going to have necessarily
that flow of, like, the daily news? What
do you need to think about? Or, kind of like
what you mentioned the other day, which was, like, hey,
you know, this large language model is as good as
the others, except when it isn't, right? And you were
just talking about the lack of information. Maybe you could
(27:58):
talk a little bit about that and some of the
more specialty industries.
Speaker 4 (28:01):
Yeah, So agriculture is again a great example of this.
There's a huge cyclical nature built in, very seasonal. You've
got harvest and planting, and so what people are looking
for is going to change around that, and you have
the ability to plan to some degree, but also you
need to have some flexibility into your pr strategy for
breaking events, like say a report comes out that says
peach harvest is going to be down, and that report
(28:24):
all of a sudden has completely changed the conversation around peaches.
You were going to do a press release, it was
going to get some attention, but now you're going to
be looped in with all these other stories that are
pretty negative. So you have to stay on top of
that even if you're planning for this seasonality, because you
are part of an aggregated mix of media content that
an AI is now going to cite and not discern
(28:45):
between unless you're going very very specialized in the query
or in the prompt that you're sending. So keep that
top of mind, and you probably don't want to
have this very, very rigid PR plan. You want to
build in these blocks, but then pay pretty close attention
to swap things out, pause, or shift as news breaks. Obviously,
if you're delayed by a day or two, it's hopefully
not going to be a huge impact to you, but
(29:05):
might completely change the stories that you're being referenced alongside,
which is one way to think about it. And then
also just staying on top of what is most popular
in keywords. Having a strong SEO basis is really really
valuable here because we're still using keywords for search determination,
and AI is similar to Google in that way,
especially if you're using Gemini, in that it's using a
(29:26):
lot of the same functions that a traditional ten-blue-links
page would use to disseminate. It's not everything involved, but
it is definitely part of the conversation. So if
you have seasonal changes and the keywords that are popular,
you have to keep that in mind too, But you
have to keep an eye on the news probably at
all times. And AI is actually a good tool to
do that, so you're getting some type of real time updates.
Speaker 2 (29:48):
Really thinking about an always-on PR and comms strategy,
I think, is interesting. And Matt, the other thing I
thought was interesting, based on your guys' findings, was
how different industries sourced a little bit differently. Finance and
media were very highly journalistic in terms of the citation rates.
(30:09):
Healthcare showed a little bit of a stronger representation of
government or NGO sources than others. Hospitality indexed a little
more towards owned media. So I thought that was fascinating too.
And maybe you could talk a little bit about, is
it the nature of the queries, or is it the value?
(30:31):
What creates that change, and what are the implications if
you're in one of those industries?
Speaker 1 (30:37):
Well, what we learned from this study is exactly what
we can see from analyzing large amounts of data. So
in other words, we wrote, you know, hundreds of thousands,
millions of queries, we analyzed millions of links. We can
make some educated guesses as to the why behind the scenes;
we don't truly know it. You know, these are obviously
(30:58):
highly guarded trade secrets, essentially, of these
AI systems. But as someone myself who has, you know,
spent time building AI systems, I would place a large
likelihood on the fact that some of this behavior has
(31:19):
basically been you know, instructed or steered into the way
that the models behave. For example, you say, you know,
healthcare is citing a lot of NGO and government sites.
This is true. Like, if you ask
it anything about medicine or healthcare, it's going to cite
(31:39):
the CDC, or it's going to cite, you know, various
others; at least in the US, if you do
a query from the US, it'll cite these, you know,
US government entities. We've seen, especially, you know, I don't
want to get too far into these weeds, but we
know from everything that's been happening with COVID-19 a
couple of years ago, people having different opinions on
(32:00):
what's right, what's wrong. You know, I believe this, I
believe that. Trust me, the AI models want to stay
out of that game as much as possible. They don't
want to be seen as having an opinion, really. So
I do think in some ways for some of these
touchier subjects, I do think the AI are trying to
(32:22):
almost remove themselves from any sort of editorial opinion and
sort of stick with the government stance. So I
do think that explains at least some of it.
Speaker 4 (32:33):
I know that Sam Altman from OpenAI has talked about
that a bit, that the default state should be this
neutrality to some degree for what you're getting. But the
expectation is, at least for them with GPT six, because
they've already started talking about it, that personalization towards your
preferences is going to become the norm very quickly. And
Google is doing this too. You know, you have the
(32:54):
option in certain cases with Gemini to choose preferred media
sources and it might change things. It's not going to
limit the other ones, because it'll go beyond your preferred
sources as needed, but it's going to prioritize and that's
something that's dictated on the individual level as well. So
going back to understanding where your audience is looking, what
do they prefer, what are their niche sources, what are
(33:15):
their companies they're looking at. Assuming that becomes learned behavior
for everybody using these tools, that's going to become even
more important to understand.
Speaker 2 (33:24):
So, Lauren, one of the things you were
talking about a little bit earlier was what I'll just kind
of call citation-friendly traits:
high authority, timely, structured, things like that. And Matt covered, hey,
the most likely day that you're seeing a lot more,
you know, content from was yesterday, which then leads me
(33:44):
to ask, so what about GEO's role in crisis
and reputation? Right? So the model is going to process
the way it is. How do we need to think
about how we start structuring responses to what the large
language models are going to pull through generative engine optimization?
What are some of the risks of outdated or absent citations,
(34:05):
and what do communicators need to think about in this
expanded environment where LLMs are now as much a channel
as anything else?
Speaker 4 (34:16):
I think this is really important and it's going to
continue to grow. I mean, you see responses. So go
on X or go on Facebook and look at the
responses around a news story. Oftentimes somebody will have asked
an AI about that news story to get a quick
summary of it, and then just dropped it as a comment.
So people are used to going in and learning about
a crisis through an AI tool, whether that's right or wrong.
(34:36):
And whether they're actually verifying the information. AI still has
pretty high hallucination rates. I mean, GPT-5, they're very
proud to get around that five percent mark in most cases,
which is still probably millions and millions of queries a
day that are presenting something completely false. So with that
in mind, I also thought it was important to note
that while recency is still part of this, there's nothing
(35:00):
stopping a summary from going back in the past and
pulling from another crisis you've had. So, say you're a
business that had to shut down a manufacturing plant last
year or two years ago, and then you've just had
to shut one down recently. As people are searching, it
is very possible that those prior events will be incorporated
into this summary in addition to what's happening now. So
finding ways to make it distinct, you know, to separate
(35:21):
what has happened in the past from the present while
maintaining that GEO perspective of this answer first query, to
try and make sure that you're getting prioritized is going
to be pretty important. There's still a lot to learn
about crisis management in terms of how it's being reflected
by AI, because there's no person that is going to
have any context behind the story at all. It's usually
(35:42):
not going to be framed as part of a larger
discussion around the economy or competitors facing the same thing
unless the user is actually asking for it. So once again,
it gets complicated, it gets layered, and it gets nuanced.
But I think there are some steps emerging on how
to treat it, and Matt might have some ideas there too.
Speaker 1 (36:00):
It's impossible to predict a crisis before it happens. Obviously,
if we could do that, we, you know,
we'd be somewhere else. But content is still going to
be cited from Monday and Sunday and last week and
the month prior. So the more that you
can make sure you have sort of a well-rounded
approach, most areas of your business, most aspects of your
(36:22):
brand covered. Again, I'm not telling you to
predict every possible crisis and make sure you have a
piece that, you know, counters it; of course, no one can
do that. But, you know, at a high level, any
major categories of your brand, major categories of your business,
even maybe stakeholders, your C-suite, maybe your customers:
the more you can have a well-rounded PR
(36:44):
approach that sort of touches on these various aspects of
your business, just to have some content out there for
when the AI inevitably is going to go look for it
when the crisis comes, I think, the better.
Speaker 2 (36:55):
Well, and one thing I thought was interesting was, the
other day I was working with a group. We were running
a workshop looking at how to quantify reputation for a
very large company that is in a very complex, very
highly regulated, highly politicized industry. And as we started really
(37:16):
mapping where their reputation played, it was with journalists:
so who are the journalists that matter? What is
the earned media? It was, what is being said on
the social channels? What are the things we're seeing there?
It was what was coming up in the Google search,
and SEO as a channel. Reddit popped up as its
(37:38):
own kind of channel in terms of where are employees talking,
what are they saying about it? And then interestingly enough,
the large language models kind of came up as a channel,
and there we were. We didn't have a million queries
to look through like our friends at Muck Rack did, but
it was interesting to me, at a surface glance,
(37:59):
how each of those told a different layer of the story,
based on perspective, based on recency, based on number of queries,
based on population. That, to me, was
really a big eye-opening moment, that there are so
many channels that need to be managed for for reputation,
(38:20):
and the large language models are one of them. And
then I guess, Matt, for you guys, the challenge is
like traditionally and a significant part of your business is
really about, you know, connecting those right journalists for the
right stories with what they're talking about. So now you've
got you've got now all these different channels which are
(38:40):
channels of opportunities, but how do we how do we
mix those together or how do we how do we
think about this? Because it could also become paralysis of
too many channels. So I'll just do what I've always done.
Speaker 1 (38:52):
I can tell you a little bit about how, you know,
I'm thinking about it. So one of the things that
is nice about these AI opinions, if
we want to call them that, to anthropomorphize them in
that way, is, unlike maybe other sources, like, yeah, okay,
maybe our brand is well connected with several journalists, how
(39:14):
do I really quantify that? Like you mentioned
in this workshop, how do we
really put a value, put a numerical value,
on that? Well, the nice thing about these AI systems
is, you know, for right or for wrong, they have
their benefits and their downfalls. But boy,
it is easy to measure them, right. We
can just say, like, hey, here's the one
hundred queries that users of our brand are going to
be wondering about. And I'm just going to go smack
those hundred queries every day and see how often my
brand is favored in a positive light. And every single
day I can look at my number: am I sixty
percent, am I seventy percent, am I eighty percent? And,
you know, of course there's nuance for a deeper dive.
Is it mentioning this aspect of my brand? Is
it mentioning this crisis that happened? But to
take a look at a high level, you know, it
is fairly quantifiable, which is nice. I think the other
other thing that you know, I tend to like about
it is, as we talked about, it does sort of
encompass a lot of the other pieces that you mentioned
that since AI is citing journalists, and it is citing Reddit,
(40:22):
and it is citing social media, and it is citing
even video content at times, like to some extent, you
can consider the AI responses as a little bit of
a kind of amalgamation of everything. Combine that with the
fact that it is fairly easy to measure. You know,
(40:42):
I don't want to say easy to measure, but it
is measurable. Uh, makes it, I think pretty attractive part
of of a of a of a sort of you know,
reputation management strategy.
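As a rough sketch of that kind of daily check, here is a minimal version in Python; ask_llm is a hypothetical stand-in for whichever AI tool you are tracking (no specific vendor API is implied), and a real version would need the favorability nuance Matt mentions rather than this crude presence check.

    # Minimal sketch of a daily brand-visibility check over a fixed
    # query set; ask_llm stands in for the AI tool being tracked.
    def brand_visibility_score(queries, brand, ask_llm):
        hits = 0
        for q in queries:
            answer = ask_llm(q)  # run the same queries every day
            if brand.lower() in answer.lower():  # crude presence check;
                hits += 1                        # favorability needs more
        return hits / len(queries)

    # Example usage (query list and brand name are made up):
    # queries = ["best crop protection tools", ...]  # ~100 fixed queries
    # print(f"{brand_visibility_score(queries, 'AcmeAg', ask_llm):.0%}")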
Speaker 2 (40:52):
So Lauren, how about from your perspective? I
know you're dying to get a little bit more
technical, a little on-page, a little how we should structure
things, because, as I said, you know, the journalistic
component was a significant part of what comes out of here,
but also just the way corporate pages and corporate news
are structured. So maybe you can give us like
(41:13):
a super high-level tutorial at least of what you're
seeing there.
Speaker 4 (41:18):
Yeah, so from a structure standpoint, you know, we have
a few recommendations that we're pursuing with clients right now
that are slightly different than in the past, designed to
make it easier for AI to read. Obviously, staying up
to date is probably the most important, going off of
Matt's point of how quickly this is all being pulled in.
Recency is foremost, and you want to have your team
on top of that. Using answers and questions: so just
(41:41):
being very, very clear, doing the answer first, whatever
it is, and then putting the question in there as well,
trying to align with what the queries are. Defining complex
words related to your industry: so going back to that
niche side of things, if there are topics or concepts
that aren't necessarily well known, or maybe in your own
research you've seen that AI isn't very good at explaining them, well,
(42:03):
assign that definition yourself, because that's going to demonstrate your
knowledge base and it's going to make it
easier for the tool to basically pull that definition into
an AI summary that's being generated for you. I did
briefly want to go back to what Matt was talking about, though,
because thinking from an overall PR and marketing perspective, what
you were both saying around basically operating in sync: essentially,
(42:25):
you know, it's really important to stay aligned on the
messaging being out there across these different groups. Right now,
it's probably going to be beneficial to have a
key message you want to share, but then also allow
your individual PR marketing sales teams to modify that message
in a way that works best for their audiences, because
then you're going to be hitting this in very different ways.
(42:46):
AI is very relational. I mean, they talk about vibes,
and the word vibe coding
and these vibe things have kind of become a joke,
but it is also true in that it's very good
at understanding non-numerical relationships between things, and it
can grasp that the message being shared out of PR,
even though it's positioned slightly differently, is similar to the
(43:07):
message being positioned by sales. And so if it's getting
it from all these different sources, the odds of
the message that you really want to have
shared showing up in a recommendation are probably going to
be a bit higher. So that flexibility, but with a
strong core, that balance I think is going to be pretty
important moving forward here.
Speaker 2 (43:23):
So, Matt, what's next? Where can people
go to learn more about What Is AI Reading, what
you guys did in Generative Pulse, and what's
next for how you guys are further exploring how large
language model behavior might evolve?
Speaker 1 (43:39):
So, of course, if anyone wants to read more details
about what I've been talking about here, you can find
it on our website. It's a specific sort of product
within the Muck Rack suite, so we call it Generative Pulse.
You can find it at Generative Pulse dot AI. And
specifically the report with the statistics we've been talking about
on this podcast are at Generative Pulse dot ai slash report,
(44:02):
so you can find that there. As far as what
we're doing to kind of keep tabs on this,
absolutely. So for those unfamiliar with Muck Rack, right, we are
a kind of all-encompassing PR software tool, in particular
with a phenomenal database of journalists, media outlets, what they
(44:22):
write about. And what's cool now, you know, to get on
my soapbox a little bit, is that now not only
do we know who the journalists are and what they
write about, but we know which journalists are influential, which
journalists have the ear, so to speak, of AI,
the AI whisperers you could call them. So that's kind
(44:44):
of like another angle that we're folding into our application
to figure out, you know, which journalists have the ear
of AI. When you, you know, maybe send
a pitch through Muck Rack, can we see that that pitch
has resulted in earned media? And then can we
see that that earned media has been cited by AI?
(45:05):
Kind of trying to bring it all full circle to
that point we were talking about earlier, of really quantifying
the value of your PR efforts. As far
as, like, us keeping tabs on the research, we are
absolutely doing this. You know, GPT-5 just came out recently,
which has slightly different patterns, and even still the existing
(45:28):
models, ChatGPT, Anthropic's Claude, Gemini, they're always tweaking what
they search. So, you know, Lauren brought this up,
but I do want to underscore it a little bit, like,
you know, these are huge tech companies that are constantly
running little micro experiments and they might realize like, oh,
(45:49):
if I throw in more LinkedIn content in here, people
are more likely to do X, Y, Z. Or maybe
it's not even that, maybe it's more subtle. Maybe it's
not just if I throw in LinkedIn content. Maybe
it's if I throw in LinkedIn content that has a
bunch of emojis in it, Like you know, even just
little things like that are constantly being experimented on. And
(46:10):
so we're absolutely staying on top of this, you know, selfishly,
our business depends on it, So of course we want
to be on top of this, and we will continue
to publish more research, as we did here and
as we discussed here, to kind of keep the community informed.
Speaker 2 (46:24):
Always appreciate it, Matt. And in full disclosure, we
have worked with Muck Rack for a number
of years. Greg Galant, one of the co-founders,
has been on the show multiple times. Loved the vision
that he has for the industry. And, you know, Lauren
as well is available to anybody to connect with, one
(46:47):
of our digital leaders here in the group, and can
be found at Morgan Myers, a G&S agency. So let's
close this up with final thoughts. So, Lauren,
what does this all mean? Going back to our title,
What AI Is Reading, the new rules of earned media
and GEO, what's your top takeaway from today?
Speaker 4 (47:08):
My top takeaway looking at this is that you're going
to need some expertise to really start interacting with AI,
and it's essentially a new medium. It's a new way
of working with the internet. It's a new way
of getting the news. It's changing how your audiences respond,
and so you need to have a very comprehensive plan
that is flexible, that can respond to major events in
(47:30):
the same way that you or I do in our
individual lives. So keeping that, you know, that core focus,
maintaining it, understanding how these systems actually work, and then
being flexible enough to respond to what's changing in the
world is probably going to set you up pretty well.
Speaker 2 (47:45):
Matt, how about you, what's your big takeaway?
Speaker 1 (47:48):
The way I've been thinking about this is this is
really a game to be played. It has rules, it
has players, there are strategies, and the more you can
think about what the rules are in your sector, figure
out you know who the players are in your sector,
(48:08):
in your niche, the better. Of course, you know, selfishly,
at Muck Rack, you know, we're trying to help our users
do this with this product that we've built out called
Generative Pulse. But absolutely, it's a game to be played
and the only way to win it is to realize
that you are in fact part of this game and
to start playing it.
Speaker 2 (48:27):
And from where I sit, this really reinforces the importance
of storytelling and what is that story that we want
to tell? Even just getting back to the title of
you know what this podcast is, how do we build
brand gravity? Right? What is that story? What is that
thing that attracts people to our brand? And as I
reflect on today's conversation, my big takeaway is that generative
(48:50):
engine optimization or GEO, it's no longer optional. It is
the new frontier of earned visibility. And the data in
Generative Pulse really shows that earned media has never
mattered more. But as you both have said, the rules
have changed. So thank you Matt, thank you Lauren for
being on the show. I invite all of our listeners
(49:12):
to connect with Matt and Lauren to learn a little
bit more. Certainly a topic we're going to continue to cover.
And I thank you for joining us in the summer
of, as I said before, not the Summer of Love, but
the summer of large language models. Please tune back soon
to join us for another episode of Building Brand Gravity.
(49:33):
I'm Steve Halsey, one of your hosts. Thank you for
joining me.