Episode Transcript
Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Michael Hartmann (00:01):
Hello everyone, welcome to another episode of OpsCast, brought to you by MarketingOps.com, powered by the MO Pros. I'm your host, Michael Hartmann, joined today by my co-host, Naomi Liu. Naomi.
Naomi (00:11):
Hi, it's been a minute, Michael, yeah.
Michael Hartmann (00:13):
It has been. I think the day we're recording this is day two of INBOUND, for-
Naomi (00:20):
I think so, yeah. In San Francisco this year, right?
Michael Hartmann (00:25):
Yeah, San Francisco. And getting ready for MOps-Apalooza, which is now less than two months away. So, yeah. I am still up in the air. I think I said on one of our recent episodes, I'll put it out up front: I am open to doing something crazy and stupid. If somebody would pay for me to go, I will dress up in something, whatever. But I think we could do it. We could do a live episode maybe, right?
Naomi (00:48):
That would be great, yeah.
Michael Hartmann (00:49):
Yeah, so all right. Well, let's get started. Today we are digging into the intersection of AI, brand strategy and trust, a topic that continues to evolve rapidly. Our guest today to talk about this is Karen Kranack. She is Director of Applied AI Strategy and Experience at, is it DEPT? Okay. She's worked with major global brands like PwC and
(01:13):
brings a deeply human perspective to AI adoption, from internal employee trust to external customer experience. So we're going to explore six key considerations for building trust in this new AI-enabled world and what it means for marketing and operations professionals. So, Karen, first of all, welcome. Thanks for joining us today.
Karen Kranack (01:30):
Thank you so much
for having me, Michael and
Naomi.
Michael Hartmann (01:33):
Well, it's good. So we framed this episode around building trust in the age of AI, and it's interesting. As I have conversations with people, I get really mixed signals: some people are really like, I have AI do a bunch of my thinking for me, or it's a partner, and others are like, I'm using it, but I don't trust it, right?
(01:54):
So why do you think this is such an important and timely topic right now in the evolution of AI?
Karen Kranack (02:02):
I think it's super timely and topical because all of us are touched by it on a daily basis. So, for example, two days ago I was on YouTube, you know when you always get served ads beforehand, right, and I saw Oprah selling Himalayan pink salt. And I thought, oh, it's Oprah selling Himalayan pink salt. And then I thought, wait, would Oprah sell Himalayan pink salt?
(02:24):
And just in taking that pause, I looked more closely, and I could sort of see that the audio track was not totally syncing with the video, and I realized that this was AI-generated. It's one of those moments where, if we aren't really paying attention anymore, we're just lost in what's real and what's reality. With
(02:44):
all the AI slop out there, it was a great example of AI slop, and we're being barraged with that now. That's the first reason why I think it's important to learn about these tools and pay attention to them. The second reason why I think it's such a timely and important topic is because we were
(03:05):
recently going on vacation, literally like a week ago, and I used ChatGPT's agent to both find me a gluten-free restaurant in Wellfleet on Cape Cod and book a reservation for me. And what was so interesting about it? I don't know if either of you have used that yet or played around with those.
Michael Hartmann (03:21):
I keep meaning
to, but I've been playing with
other stuff.
Naomi (03:25):
Yeah, I find with a lot of things that I ask ChatGPT to do, and the recommendations it gives me from some of the vacations we've taken over the summer: half of the places are closed and it doesn't recognize that, or the place doesn't exist anymore, or the hours are completely wrong, or they've moved locations, and it just doesn't seem to update that. And then if I call it out and be like, hey, this place
(03:47):
actually closed: oh yeah, you're right, let me find something else. Come on.
Karen Kranack (04:03):
You can actually see on the screen what it's doing as it's interacting with Resy or OpenTable, and it was making tons of errors. It was making a lot of mistakes, but it was catching itself for the most part, going back and retyping my name, or, you know, at first it selected one person for the reservation when it should have been two. So it was really underscoring for me too: it didn't make me think badly about Resy or OpenTable,
(04:25):
but it did make me question, obviously, LLMs and their actual abilities and where we are.
And then the third thing that I think makes this so important: I read an article in the Wall Street Journal two days ago about a recent study which found that the less literate a person is, the more likely they are to be awed and wowed by AI and actually use it.
(04:47):
It kind of brings to mind that old Arthur C. Clarke line: any sufficiently advanced technology is indistinguishable from magic. Literally, the more people think it's magical, the more likely they are to use it.
(05:08):
And that brought up an important marketing question for me, which is: should we be transparent if it reduces marketing efficiency? I think, as human beings, one of the ways we can distinguish ourselves from machines is to be honest, and to actually try to provide truthful information for
(05:28):
each other. So I think there's this human element, but I also think the awe passes fairly quickly, because once you start using these things, you get increasingly used to them. It's kind of like television. The people who first saw television were blown away, and now it's just a box that sits in all our lives and does its thing whenever we want. Technology is very similar in that way. So I think we absolutely should continue to be transparent, because we're just going to continue to be bombarded with this stuff. And brands themselves are kind of endangered too. Thinking about the Oprah example, right: this is someone who is trusted; brands are trusted. What becomes that center of truth? And how do we support brands in actually promoting themselves as that truthful entity?
Michael Hartmann (06:16):
Yeah, it's interesting, because I think AI video is quickly becoming harder and harder to detect. But I've started noticing, I don't know if it's on Facebook or Instagram, I get these ones that pop up, and it always begins with the same clip of Joe Rogan from his podcast going, have you seen this one, Jamie? Go pull this up, right, to his co-host. And it's like, wait a
(06:36):
minute, every one of these is starting with the exact same clip. I'm like, this is not real, right? There are just people trying to take advantage of how well known he is. So it's kind of like that too. I don't even know that it's an AI thing, but it took me, I don't know, ten to a dozen times of
(06:57):
seeing that to kind of click and go, wait a minute, I keep seeing the exact same clip. It's weird. Totally. Okay.
(07:20):
So, all that being said, I don't know about you, but in the organizations I interact with, I keep hearing there are actively people trying to adopt AI into their organization, and they're rushing, but a lot of them are stumbling. Some are more public than others. What are some of the lessons you've seen or learned from all that?
Karen Kranack (07:34):
Totally. It's interesting. One of the things I'm really seeing as I work with large-scale enterprise clients is they have this intense need to keep up with the Joneses, right, and so AI is top of mind. Literally when I go into meetings, you know, it'll be the entire agency there, where we do builds from soup
(07:56):
to nuts, but the first questions we get are: what AI are we including? So I think every organization, every company feels this intense pressure. But what's interesting is that it still takes time to build these tools, and it's really important to assess whether these organizations actually need them. And that's part of what my job is: to help come in and say,
(08:18):
is it the appropriate time, either for a front-end experience or a back-end experience, to improve workflows? What's interesting about technology and wanting to keep up with the Joneses is that a lot of things appear to be table stakes to clients. So I'm hearing them say things to me like, yes, we must absolutely have conversational search.
(08:40):
Yes, we must have next-best-action opportunities, you know, for when people are looking at content or products on our site. But when I talk to them, I try to explain that these things really require thinking about what marketing platforms and what tools and technology are required to actually build them, and it's not a question
(09:02):
of just plug and play. Adobe Experience Manager, for example, doesn't have a plug-and-play conversational search at this point in time. So although they all want these things, there actually aren't that many that are really great out there in the marketplace, and although we need to build and work on these things, it's still a nascent technology. And even
(09:23):
though all websites have a search on them, migrating over to, say, conversational search isn't necessarily the simplest thing to do.
Michael Hartmann (09:34):
Yeah, it's interesting. I mean, the thing that I've seen at the organization I'm at that's been most useful is implementation of an AI tool that can be added into meetings when they're recorded, and it transcribes, and it does a really good job of it. I've been truly sort of blown away at how accurate it is.
(09:54):
There may be occasional, to me, minor things, like people's names are misspelled, but that's an easy miss. Humans do that all the time with my name, so I would hold no grudge. But beyond that, I haven't seen much. I'm starting to see more and more, but usually it's led by a handful of people really taking advantage
(10:18):
of it and leading the way on what has worked. So, yeah. At the beginning we talked about, I forget, was it six considerations for building trust in this world? What did you mean by that? Can you talk us through what that is?
Karen Kranack (10:37):
Sure. So really, the first one is kind of what we've led with, which is leading with transparency and honesty, being really clear and forthcoming. One of the things we've seen in the marketplace, I mean, I think the biggest story of the year was probably Duolingo stating publicly, or actually it was an internal memo that got leaked, but stating that they were going to start replacing their translators with AI. And
(11:01):
there was a public outcry about this, especially because translation is actually something artificial intelligence can do really well for the most part, but there's the emotional, human piece of this, and the whole purpose of Duolingo is obviously to provide language training for people. So once this leaked, people were very
(11:21):
angry and upset, and there really was a lack of transparency and honesty about what Duolingo was trying to do in the background, and I think that's what really hampered them in that case. So, thinking about things like that: transparency and honesty, very important. Secondly, clearly articulating and demonstrating consumer value and
(11:44):
benefits. Again, going beyond AI strategy and use and really thinking about: why would a person care about this? Why would your customer want this? A problem example was Air Canada. Remember this? I think this was a little over a year ago,
(12:05):
but anyway, Air Canada had a customer who was interacting with their chatbot, and this customer was trying to get a bereavement fare, and the chatbot told them that they could, in fact, retroactively get a bereavement fare. They thought it was all good, they bought the full-price ticket, and it turned out that when they tried to do this later,
(12:25):
they could not. And Air Canada said, oh, it's not our fault that the chatbot told you that, even though it's our chatbot. So, not good. Again, poor customer value, right? Obviously you can't tell your customer one thing and then undermine it in another way.
Naomi (12:37):
So I do have a follow-up to that. As we're talking about adoption within an organization: how do you balance, as
(12:57):
someone in marketing ops, the speed to adopt? For myself and the folks on my team, there are all these tools, and features of our existing tool set, that we want to implement. How do you balance that speed to adopt with building sustainable trust internally? Because it's just exploded in the last 18 months,
(13:17):
right? But trust, I feel, as we've been talking about this year, hasn't really kept pace. There are concerns about misinformation, data privacy, job displacement. There are all these LinkedIn AI influencers talking about how they vibe-coded a platform in a week and it replaced their
(13:38):
entire team. So how do you balance that internally? And have you seen examples where a lack of trust potentially could have derailed adoption somehow?
Karen Kranack (13:51):
Totally. I have a couple of thoughts on that. The first is that I'm talking to a lot of different corporations, and what I've heard from most people is kind of what you're saying: a lot of the people working in companies are trying out these tools on their own. So there's that piece of it, but that potentially undermines confidentiality, if people are using these,
(14:12):
you know, sort of outside of school, so to speak. So that is a problem. And what I'm hearing is that a lot of employees are actually trying these tools and taking them back to their employers and saying, hey, we should investigate whether we can adopt this, and a lot of the time they're getting pushback. So it really seems to be cultural and top-down: a company or organization really
(14:33):
needs their CEO at the top to be open to investigating and trying these things. Within our own organization, we're super into experimentation, but we're also codifying and collectively creating. The group I'm in is actually doing a large portion of our work on internal enablement: how do
(14:54):
we pull in these tools to improve project management? How do we improve the creative tools that we're using, our marketing operations tools, our media buying tools? So it really is about the organization's maturity cycle. For the most part, organizations know that AI is important, and most organizations are shifting. Now, in highly regulated
(15:18):
industries, like banks or healthcare, I'm actually still seeing enthusiasm for adoption, but the rollouts are kind of slow. It might be, sorry, all you get is Microsoft Copilot, because it's already embedded in the Office suite, which we already have. But I'm not really seeing resistance, which
(15:40):
is super interesting. And the people I've worked with, even the people who don't really know AI or are a little scared of it, are still open to learning about it. So I think there is, overall, an openness and a willingness to learn these tools and try them, and it's really about the maturity of the organization and the willingness to adopt.
Michael Hartmann (15:59):
So there's skepticism about it, right?
Karen Kranack (16:02):
Totally.
Michael Hartmann (16:02):
Yeah, yeah.
Karen Kranack (16:03):
Which is well-founded. I mean, obviously we know that AI tools often train on the data you give them. The enterprise models will actually say, you know, with Gemini or ChatGPT: this model is not training on your corporate data. So it's very important for corporations to get enterprise licenses.
Michael Hartmann (16:26):
Sure, yeah. Okay, so you've covered, I think, two of the six so far, if my math is right. What are the other four?
Karen Kranack (16:40):
Okay. So number three is proactively addressing and managing privacy and data concerns, and explicitly communicating to people how their data is being used. The big, I think, stinker in the marketplace has really been Meta, because their algorithms are a black box. Nobody really knows what's happening with your Instagram, Facebook or WhatsApp accounts. Whatever their claims are, we don't really know, and I think
(17:02):
that has really underscored a lack of trust, versus, say, Apple, which, interestingly, is falling behind in the AI race because of their emphasis on privacy. Apple's AI tools mainly run on device, so when you're using AI tools on your Apple phone, whatever personally identifiable
(17:24):
information there is about you stays on your phone, whereas when you're using Meta, you're obviously helping feed the beast, as it were, and helping train on all their data, no matter what you do. So, definitely, proactively addressing privacy is important. Fourthly, committing to ethical AI and actively mitigating
(17:44):
algorithmic bias.
Karen Kranack (17:46):
This is one where it's been scientifically proven and well documented that there is bias inherent in these systems. For example, Amazon got themselves into a pickle a couple of years ago. They had an AI-based hiring tool that would look at resumes, and because historically Amazon had gotten
(18:09):
more resumes from men than from women, it was deprioritizing women. If a resume said anything about women, like the candidate had been involved in women's sports or something, it would actually kick out the resume and not put the candidate forward. So obviously they recognized this bias was happening and that it was a problem. And, interestingly, I just read that NVIDIA is using synthetic data sets to reduce bias.
(18:33):
Because they know there was a lot of inherent bias in the way LLMs were trained, they're trying to come up with synthetic data that is more generic and will actually create a more level playing field.
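To make the balancing idea concrete: the hypothetical Python sketch below counts how often each group appears in a toy labeled dataset, then asks an LLM to generate synthetic rows for whichever group is underrepresented. The openai package, model name, prompt, and data are illustrative assumptions, not NVIDIA's actual pipeline.

    from collections import Counter
    from openai import OpenAI  # assumes the openai package and an OPENAI_API_KEY env var

    client = OpenAI()

    # Toy labeled training rows: (resume_summary, group). Illustrative only.
    rows = [
        ("Led a ten-person platform engineering team", "men"),
        ("Built CI/CD pipelines for a fintech startup", "men"),
        ("Women's chess team captain; ML research assistant", "women"),
    ]

    counts = Counter(group for _, group in rows)
    target = max(counts.values())

    # For each underrepresented group, generate just enough synthetic rows
    # to level the counts, so a downstream model stops learning the skew.
    for group, n in counts.items():
        for _ in range(target - n):
            resp = client.chat.completions.create(
                model="gpt-4o-mini",  # illustrative model choice
                messages=[{
                    "role": "user",
                    "content": (
                        "Write one realistic single-sentence resume summary for a "
                        f"qualified software candidate ({group}). Vary the wording; "
                        "do not include names."
                    ),
                }],
            )
            rows.append((resp.choices[0].message.content.strip(), group))

    print(Counter(group for _, group in rows))  # group counts are now equal

Measuring the skew first and generating only enough rows to close it keeps the synthetic share of the training data small and auditable.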
Michael Hartmann (18:48):
Very
interesting.
Karen Kranack (18:52):
Fifth,
emphasizing human oversight,
empathy and accountability,really just reassuring users and
consumers that human oversightremains, that things aren't just
going into a black box.
And someone who's doing thiswell in the marketplace,
especially for MarOps, isSalesforce.
Salesforce they're really withAgentForce, they're really
(19:15):
positioning their AI-based toolsas your partner and sort of
emphasizing the human element ina way that I think is really
kind of refreshing and that theywant you to be freed up to do
more of your tasks and reallyemphasizing like that empathy
piece, which is pretty profound,very interesting.
Michael Hartmann (19:32):
Yeah, I thought it was very interesting. I saw just today a headline on something, a Salesforce thing, where they described themselves as an AI-based CRM platform or something like that. I was like, oh, they're just going to throw AI into the name of this version of what they are, right?
Karen Kranack (19:48):
Totally. I mean, I think all of these major platforms are a little out over their skis in terms of what the AI can actually do, to some degree, sure. But the last one, finally, is really educating and empowering consumers, to demystify AI. Part of that is,
(20:08):
again, giving explanations for when and where it's going to be used. There was a lot of controversy over Midjourney and other image-generation tools just siphoning in artists' work from all over the world without any permission, right, and that's really undermining to people. So look for ways to actually say, hey,
(20:30):
we are using this data to train, or we're not. Or, you know, Unilever has an AI called Unibot with a robust training blueprint that actually provides some education for people. So, letting people know: this is why we're using this. It's not just some random feature that we're bolting on to keep up with the Joneses, because that's not helpful either.
Michael Hartmann (20:53):
Yeah. I'm just curious, because my experience with AI being used inside organizations has been mostly for, as one CEO I know called it, a time dividend, right, sort of leveraging it as a tool. It felt like a lot of what you described in those six things was mostly the scenario where the tools are
(21:16):
used either consumer-facing or close to consumer-facing. Do those same principles apply when you think about it from the standpoint of employees? They are the customer in this case.
Karen Kranack (21:31):
Totally. And I think employees have to really have a great reason for using these tools. We're doing a lot of, and I'm personally doing a lot of, implementations looking at internal workflows, in particular around CRM, and helping optimize those workflows for employees. For example, there was one major technology company we worked with where
(21:52):
there were basically a hundred steps to pushing their emails out through their marketing system, right? And one of the things we found is that there were literally about 30 steps that could be enhanced with AI, but the other 70 were really about improving human communication. That's one of the ways we actually
(22:15):
support organizations to help build that trust, because we come in and we say, this is something you can improve on, but it's also about looking at your actual workflow and your human interactions. And people can then see that, okay, you might be able to use this LLM to, say, improve your generative engine optimization, so that you get better sort of SEO
(22:37):
traction on LLMs themselves. You can use this little tool to do that, but it's not going to take away your larger job, for example.
Michael Hartmann (22:46):
Right, okay, yeah, that makes sense. So this is interesting, because I think it'll be these two sort of scenarios, customer-facing or client-facing or whatever, consumer-facing, versus internal, and how that plays out. Because it feels like a lot of the things are similar, and the nuances are where you get
(23:09):
into differences, I think.
Karen Kranack (23:14):
I'm sorry, do you mind if I-
Michael Hartmann (23:16):
No, no, go
ahead, go ahead.
Karen Kranack (23:18):
Oh yeah, I'm sure you saw the MIT study that came out last week that said 95% of AI projects fail. But what they did say does succeed, to your point, is internal operations optimization; that's actually the place to focus, more than the customer-facing side. That's something I am seeing in the work that I'm doing: creating
(23:39):
those relationships with the internal teams and then helping them level up has definitely seen more success than, say, some CPG saying, oh, we just need to stick a chatbot on our website.
Michael Hartmann (23:53):
True. It feels like MIT's got people out there researching why AI is a bad thing, because I think there was a study, I thought it came out of them, maybe it was Harvard, it's up there in Boston, right, where they had students come in and
(24:14):
either not use an LLM at all, use it as a partner after writing something, or just use it to basically generate, and they basically said it makes you dumber. At least that was the shorthand that people used promoting it. And I was like, well, first off, it's a specific case that's probably not generalizable, but it also kind of
(24:36):
matches what I would expect to some degree. When you use it as a tool, like I have found with my use of ChatGPT, which is what I mostly use, if I don't actually spend time to do a good job of providing input context and reviewing what it generates, it's not great.
Naomi (24:55):
You know, Michael, I used to be amazing at math, and now I'm like, okay, what's 10%? Let me grab my phone. Right? It's true, you don't use it, and we have all these other tools. But at the same time, why do you need to do back-of-the-napkin math anymore? Longhand math, right? So if you don't use it, you lose it, right?
Karen Kranack (25:17):
It's true. We just have to hope we don't lose the technology.
Michael Hartmann (25:24):
Well, I think it's interesting. So I get fascinated, and this is a total rabbit hole we could go down, but I get hooked into these videos of people doing survival-type stuff, right, or whatever, and I think part of that is this desire to not forget how to do things by hand, without all the technology and tools that we
(25:46):
have, right? I don't know what it is, maybe I need some sort of psychologist to evaluate me, but it feels like I do fear that, right? I do think that if all we did was just let AI do all this heavy thinking, then we will lose some of that.
(26:06):
But I don't think that's the way we should use it, to your point, right? I mean, calculators came around; the abacus was around before that, and all these things, right? They're all disruptive technologies. Tractors came around to replace, you know, cows pulling plows behind them.
Karen Kranack (26:26):
But I think you're making a good point about not losing critical thinking skills. I think that's the key. Whether you're talking about education or the workplace, we still need to be thinking big picture. We still need to be able to pull things together. And I just read another article saying that taste, human taste, is actually
(26:47):
something that AI can't replicate.
Michael Hartmann (26:51):
Yeah.
Karen Kranack (26:52):
So, you know, how can we-
Michael Hartmann (26:53):
Well, I was talking to somebody who's in a journalism role, and we were talking about something else, but at the end I was like, what do you guys think about it? Because the people I know who are writers, actual writers, are not really fans of it. But she said
(27:14):
something that was interesting: because they are in the business of breaking news, right, new stuff, AI can't really replace them. AI is dependent on other stuff to consume from, right now.
Naomi (27:24):
So, right now. Interesting stuff.
Michael Hartmann (27:27):
Yeah, right now. Right, right now. Okay, so let's keep going. We were talking about customers versus internal, so, external customers, let's stay focused there. You touched on a few missteps, but are there major
(27:49):
themes of missteps that companies are having when they're introducing AI into their customer experience or customer-facing interactions?
Karen Kranack (27:59):
I think the biggest missteps, honestly, first of all, it's expensive to implement these things, and it's that sort of willy-nilly, not putting a lot of thought into it. We do hear from some clients, we need AI, and it's sort of, to do what? That's part of trying to figure out how you really serve that customer value. So I think the missteps are really just putting the
(28:20):
time, effort and money into something that's not really going to go anywhere, which is why I often recommend doing a proof of concept or a pilot study, rather than sinking a ton of money into overtly doing things that may not matter. The other piece of this, too, is that, again, it takes a lot of effort. People think that,
(28:41):
sure, we have a search engine, now we can just plug and play conversational search. But the truth is you need to actually set up that LLM, train that LLM; it needs to observe the interactive behavior of customers over time. So I think there is this misunderstanding in the minds of some clients, where they think that, well,
(29:01):
search is table stakes, therefore conversational search should be table stakes. I'm seeing that quite a lot, this misunderstanding of the technology itself, where we are, the abilities of that technology, and the time it takes to still make really great technology. Which kind of brings me back to what you were saying: you have to actually put a lot of thought
(29:22):
into how you're engaging with AI. You can't just say, write me a poem, and expect to get as good a poem as if you gave it a framework to work from.
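To make the "not plug and play" point concrete: a bare-bones conversational search over a site's own content is retrieval plus generation, roughly as in the hypothetical Python sketch below. It assumes the openai and numpy packages plus an OPENAI_API_KEY; the page snippets and model names are illustrative, not tied to Adobe Experience Manager or any particular platform.

    import numpy as np
    from openai import OpenAI  # assumes OPENAI_API_KEY is set in the environment

    client = OpenAI()

    # Toy "site index": in practice these would be chunked pages from the CMS.
    pages = [
        "Our return policy allows refunds within 30 days of purchase.",
        "Bereavement fares require documentation and apply before travel.",
        "Support hours are 9am to 5pm Eastern, Monday through Friday.",
    ]

    def embed(texts):
        # One embedding vector per input string.
        resp = client.embeddings.create(model="text-embedding-3-small", input=texts)
        return np.array([d.embedding for d in resp.data])

    index = embed(pages)

    def conversational_search(question):
        q = embed([question])[0]
        # Cosine similarity against every indexed chunk; keep the best match.
        scores = index @ q / (np.linalg.norm(index, axis=1) * np.linalg.norm(q))
        context = pages[int(scores.argmax())]
        answer = client.chat.completions.create(
            model="gpt-4o-mini",  # illustrative model choice
            messages=[
                {"role": "system",
                 "content": "Answer only from the provided context. "
                            "If the context does not cover the question, say so."},
                {"role": "user",
                 "content": f"Context: {context}\n\nQuestion: {question}"},
            ],
        )
        return answer.choices[0].message.content

    print(conversational_search("Can I get a refund after a month?"))

Even this toy version shows where the real work lives: chunking and refreshing the index, grounding answers so the bot cannot promise what policy does not cover (the Air Canada failure mode), and evaluating quality over time.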
Michael Hartmann (29:34):
I'm laughing, because my first playing around with ChatGPT was to get it to write rap lyrics for-
Karen Kranack (29:40):
Nice.
Michael Hartmann (29:43):
That was a long time ago. It feels like forever ago. So I'm curious about one of the things, because, again, we talked about this before we started going live here. But there's an organization I work with where AI is being really embedded into the way the company works, and it's from the top down, but there are dedicated resources for it too,
(30:04):
to try to drive that adoption. So are you finding any difference? Are you seeing that much with your clients or people you're talking to, and do you see it making a difference in the success rate?
Karen Kranack (30:20):
I see a huge difference, and it's interesting because it kind of depends. Especially in the highly regulated companies, if they really put forward dedicated staff to work on it, I'm seeing them land very successful use cases. Say, and this is not a client, but I've read about a mortgage lender
(30:41):
creating a chatbot that can access secure data without actually pulling that secure data into the chatbot responses, so it can relay the information, but the data doesn't train or stay with the chatbot. Things like that are very successful. But there are cases, too, where, I think it really is about the
(31:04):
organizational maturity and organizational buy-in: are they willing to invest in the time and the people to solve those internal problems? If they're not doing that, if they think it's a band-aid, if they think, oh, we just need to throw something up on our website, that's not going to work.
Michael Hartmann (31:22):
Yeah, that makes sense. So I want to go back to the internal folks a little bit, because I do think you've got a real mix of people. When you're rolling things out internally, what's your guidance for people in terms of getting the best
(31:45):
chance of success when they roll out and drive adoption of AI systems internally, for, like I said, that time benefit? And if you've got any examples, that'd be great.
Karen Kranack (31:59):
Yeah, success really comes with, again, looking at those workflows, like that marketing CRM workflow I talked about. Success there came, again, from talking to the people. It's funny, I've taught qualitative research methods and it's still one of the most important parts of this job. Actually, I was listening to another one of your
(32:19):
podcasts, and it was about how doing this work is almost like psychology, being a therapist in a way. You've really got to bring in the human element and really talk to people, and that's how I've found the most success: talking to clients and sitting down with them and literally mapping out their workflows with them.
(32:39):
I do a tremendous amount of that mapping out, and they can see what it looks like, they give input to it, and they understand, oh, this is very clear. I think it's providing that clarity that provides the most success for clients. You can't just say to a client, or anyone really, go adopt Adobe Experience Manager for the cloud and then figure it out.
(33:03):
I actually did a keynote talk at Adobe about Adobe Experience Manager and GenStudio for performance marketing, and it was a hands-on learning session. I think that's the really important kind of work we need to be doing, that educational piece, and I think that's where
(33:23):
I personally provide the most value, because it is about coming in, working with people, being very one-on-one, collaborative. Then you get the organizational buy-in, and those folks you're working with can then cascade that out to the other employees that need to learn.
Michael Hartmann (33:48):
Yeah, yeah. I know Naomi was probably reacting to the idea that you just roll out technology for technology's sake, because I know you still measure adoption on your technologies.
Naomi (33:55):
For sure, yeah, yeah.
Because, well, especially if there are tools that are user-based licensing, right, you kind of need to be able to make sure that doesn't get overlooked. And I feel like we could have an entire episode just based on internal trust, employees and adoption:
(34:17):
technology adoption, change management, all of that stuff. And the thing that I've been coming up against right now, internally, is, I think it would be interesting if there was some kind of playbook for building confidence with your colleagues who feel unsure about AI or feel threatened by
(34:41):
it, or if there are people who are digital-native employees versus those for whom technology is maybe not the strong suit. It's kind of this threading of the needle, and I've asked myself and the team about this: how can we, as marketing ops leaders,
(35:04):
champion adoption internally without making it feel like it's some kind of forced compliance? Because I just never think that's going to turn out the way that you want it to.
(35:24):
I almost feel like every time I use the word AI now internally, I just never really know the reaction I'm going to get. And so something that I've been thinking about as well is putting together an AI bootcamp, almost like an internal training session for the people that I work with on a daily
(35:45):
basis, to make sure that we're all on the same page. Because it's something that I consume and learn about quite extensively, but that knowledge gap needs to be bridged, right?
Karen Kranack (35:53):
Yeah, absolutely. And you can even do that with something fun to start. You know, learn to do a mini vibe-code session with Claude. I've done this with people, where it's literally, pick something you like. I had it make an interactive crossword puzzle based on the TV
(36:16):
show Severance, or something like that. Just getting people to get familiar with something enjoyable.
Michael Hartmann (36:20):
Totally, yeah. I mean, I think it would be good to do. I know, as an individual, I've tried to do things like that. I feel like I've gotten, I wouldn't say I'm an expert, because I don't know what that even means for using ChatGPT, but I feel like I've gotten pretty darn good at it as an individual. I haven't tried the ChatGPT agent thing, but it's on my list. I've tried other things, and I did one the
(36:44):
other day where I had an audio track for something and I wanted to transcribe it. I was like, oh, there's got to be a free, low-cost option for that. And there is, but it requires coding with Python to use an AI API. I'm like, I'm not going to do that. So I tried something else with a kind of drag-and-drop solution,
(37:06):
and it worked great. But, I'm on a free level, I went right through all my AI credits in no time. And that's the part for me: I want to try doing that stuff, and if I had a company that was willing to support that and spend a little bit of money to try it, I can think of lots of things that would be super valuable, right?
(37:27):
That's the part I'm struggling with. I want to learn, I want to try to do more, and try not to spend a lot of money on it as an individual. So if you've got any suggestions for where people can try to do some of that, or how to convince their organizations to do it, I'm all ears.
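For anyone curious about the Python route Michael decided against, it is only a few lines against a hosted speech-to-text endpoint. This minimal sketch assumes the openai package, an OPENAI_API_KEY environment variable, and a pay-as-you-go account; the file name is hypothetical.

    from openai import OpenAI  # assumes OPENAI_API_KEY is set in the environment

    client = OpenAI()

    # Send a local audio file to OpenAI's hosted Whisper model and print the text.
    with open("episode_audio.mp3", "rb") as audio_file:
        transcript = client.audio.transcriptions.create(
            model="whisper-1",  # hosted speech-to-text model
            file=audio_file,
        )
    print(transcript.text)

Hosted transcription like this is billed per minute of audio at the time of writing, which is the trade-off against the free tiers of the drag-and-drop tools Michael mentions.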
Naomi (37:45):
Can you use ChatGPT? Sorry, I was saying, can you use ChatGPT to learn about ChatGPT?
Karen Kranack (37:54):
Right, yeah, I've
done that.
Naomi (37:55):
Yep.
Karen Kranack (37:58):
Totally. And one of the things we've done, our internal enablement team has collected data by just surveying people internally: what are you using? And then going out and trying to get some licenses for people to try, maybe, the most popular tools.
Michael Hartmann (38:46):
...LLMs, you know, you can't cheat, which is interesting. My son, who's in high school, we just got through his curriculum, and they're actually specifically calling out, we're going to tell you when and how you're allowed to use these tools as part of your work, which
(39:10):
to me is a much more reasonable approach. What I worry about for my two who are in college is that when they come out, because they were told not to use it, they won't have that experience, but all these companies are going to be expecting it.
Karen Kranack (39:26):
Interesting. Are they? What I tend to see at the university level, having taught at the university level, is that it tends to be instructor-based, whether students can use it and to what degree. What I've basically seen is that, to your point, it can either be banned entirely, or there can be a hybrid approach, which is what I personally have used. So I teach qualitative research methods, right, and one of the
(39:47):
great things about qualitative research is that you can't fake it. You actually have to go out and talk to people. But I do allow my students to use LLMs to analyze that data, or at least get some ideas or thematic concepts from that data. But they have to tell me and prove to me how they did it. You can't just go off and do it and
(40:08):
turn in a paper and say, that's that. And then, of course, there are those who have unrestricted use. It's funny, I was thinking about this question, and I asked Gemini yesterday: what do you think
(40:28):
is the best path forward for students? And it said, a hybrid approach is the only way forward, or something. You know what I'm saying? Of course the AI is like, you must use AI. But I think the reality is it is a hybrid approach, and I think the most important thing is teaching people critical thinking skills, and that has to come both from parents and
(40:48):
from the schools themselves. And that's even thinking about, and this is getting esoteric, but, you know, logical fallacies. What is a red herring argument? What is a straw man? Starting to think about, when you're using these tools, are you getting back content that actually makes sense? And then telling your
(41:10):
students, I have to start with a draft of my paper; I can't ask ChatGPT to start the draft of my paper. It might be able to help me brainstorm ideas, but it can't be the main tool that I'm using. I think that, unfortunately, kind of has to be investigated and taught. Human nature pulls us towards a
(41:31):
bit of laziness sometimes, kind of, sure, I'll just get rid of that. But I think young people today do have to learn to use the tools, maybe in very specific contexts, like, teach me how to code something that will analyze this data, or, take my data and turn it into graphs and charts that I can then put into my
(41:53):
presentations. I think it's about looking at things in manageable chunks. And I'm hearing about instructors doing oral tests too: you can turn in a paper, but then you also have to make a sort of oral argument around your paper, for example. So it is probably a hybrid approach, yeah, but there's no
(42:16):
easy solution. Actually, going way back to my Oprah example, it's almost like working with any of these tools: can you take a pause and bring more consciousness to what you're doing and how you're interacting with these tools,
(42:37):
and really put some thought and care into it? I'm not sure of the exact way to teach that, other than to model it, right?
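For the LLM-assisted thematic analysis Karen permits, a minimal hypothetical sketch might look like the Python below; it assumes the openai package and an API key, and the interview snippets and model name are illustrative. The point is that the student supplies raw data they collected themselves and can show exactly how the themes were derived.

    from openai import OpenAI  # assumes OPENAI_API_KEY is set in the environment

    client = OpenAI()

    # Interview excerpts the student gathered themselves. Illustrative only.
    interviews = [
        "I only trust the chatbot for store hours, nothing about my account.",
        "Honestly I stopped reading reviews; I just ask the AI and hope.",
        "If a brand tells me when AI wrote something, I trust them more.",
    ]

    # Ask for candidate themes with supporting quotes, so every theme can be
    # verified, and defended, against the source data.
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model choice
        messages=[{
            "role": "user",
            "content": "Suggest two or three candidate themes in these interview "
                       "excerpts. For each theme, quote the exact supporting lines.\n\n"
                       + "\n".join(f"- {t}" for t in interviews),
        }],
    )
    print(resp.choices[0].message.content)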
Michael Hartmann (42:42):
I think it's interesting. You've brought up the term critical thinking at least twice in this conversation. I have a big criticism of the educational system, at least in the US, and I'm not going to say anything about Canada, but I think our educational system has kind of lost that as a core skill, right? It's more about compliance and testing and
(43:04):
things like that. But it's interesting. Today I was in the car listening to a totally different podcast, a totally different thing, and the guest was someone, I forget the name of it, who started a school that is using AI agents. Truly. It sounds like, and I haven't seen it, right, but students get only
(43:28):
two hours a day using some AI tools to do their core curriculum stuff, and it meets them where they are, regardless of where they are relative to grade level. And the teachers are not teachers in the sense of being up there lecturing; the teachers are more like coaches. And I think it was interesting because she also brought up the idea of critical thinking.
(43:49):
She repeated that over and over when she was talking about the way they do it. They're focused on teaching that, and discernment, and they're not just opening up ChatGPT for these kids. It sounded like they're using some bespoke solutions that pick and choose the engines behind what they're doing for each student.
(44:10):
But I thought it was really interesting. This is the first time I've heard about AI being truly embedded into education where I thought, that's really out there. That's also a really expensive school, so it's not like it's scalable. That's part of the challenge.
Karen Kranack (44:31):
I mean, it sounds amazing. I have not seen that in action at the university level, but I'm sure there are some professors out there who are doing something similar. But I think that's where we want to get to, and, yeah, it's amazing.
Michael Hartmann (44:43):
I'll have to look for that. I mean, Naomi, you've got one who is all the way, way early on. Are you seeing anything like that?
Naomi (44:50):
Yeah, I mean, she just started junior kindergarten. Yesterday was her first day. I know, right?
Michael Hartmann (44:58):
No, not at the age of four. But this woman talked about how they have kindergarten kids who are doing that, five-year-olds. So, you know, I'm just trying to-
to.
Naomi (45:12):
It's interesting.
As someone who works in techand uses devices like a little
bit too much, I'm very much okaywith not giving her a single
device for as long as possiblewe did the same thing with our
single device for as long aspossible.
We did the same thing with ouryoungest, you know, as long as I
possibly can anyways, yeah,yeah, there's, you know,
especially after sorry, no, Ithink it's smart, sorry, yeah,
(45:33):
no, especially after readingthat, that book, the Anxious
Generation.
Michael Hartmann (45:37):
So I just
started listening to it and not
reading it.
Naomi (45:40):
Yeah it's.
It's actually a bit you know.
I'd love to talk to you aboutit after you're done.
Michael Hartmann (45:45):
Yeah, well for
me.
I grew up at the point where alot of that stuff he says
started right.
Yeah, the cusp of that.
Karen Kranack (45:54):
Yep.
Well, one of the things to thispoint is you know I'm sure
you've read about theunfortunate, the suicide that
occurred, the chat, gpt, openaibeing sued, and I think that
this is one of the things that'sreally important to also help
young people, younger peopleunderstand is that it's not a
person you know, and it feelslike a person, it acts like a
person, sort of it seems like aperson but it is not.
(46:17):
And you know really helping,because obviously I saw that
OpenAI is now going to institutesome parental controls or
whatever.
But what we know about LLMs isthat they really can be
jailbroken really easily, likedepending on how you talk to
them.
Like they don't always adhereto their guardrails.
It's really kind of like not atotally safe space, and I think
that's something that we alsoneed to emphasize.
Michael Hartmann (46:41):
Well, and I
think the more they act kind of
like human.
There's a post today onLinkedIn that I saw where
someone said there's research Ihave not gone to validate it so
I won't vouch for it but it'sbasically said that some people
did some experimentation wherethey put like LLM based engines
as people in a a system that waslike a social media platform
(47:08):
that's intended to likegravitate people towards like
controversial stuff, and and itactually led to like this really
divisive thing with a bunch ofchatbots right.
So like it's kind of likeyou've got that plus these
engines that are designed tokeep you engaged for as long as
possible, and so, um, yeah, it's, it's going to be an
(47:30):
interesting time over the nextfew years.
I was going to say the nextfive to ten years, but I suspect
that the window of time isactually shorter than that.
Karen Kranack (47:37):
So yeah, that's.
Am I allowed to say like Iactually think no one should be
on social media at this point,but yeah, totally, I think you
can.
Michael Hartmann (47:49):
I think that's
okay.
I would love to.
There was a point in time whereI deleted stuff off my phone
and then I slowly added it backon and I, kind of on a daily
basis, like what am I doing?
Yeah, so, uh, yeah, cause, likethe danger is that we've talked
(48:10):
to other kids like don't let itcontrol you, you control it.
Right, that's the, that's thething we need to do.
Um, karen, lots of fun.
Uh, I don't think, even think,we scratched the surface of what
we could have covered with you.
So, thank you so much forjoining us.
If folks want to learn moreabout what you're up to and what
you're doing, what's the bestway for them to do that?
Karen Kranack (48:32):
They can catch me on LinkedIn. So, just Karen Kranack: K-r-a-n-a-c-k.
Michael Hartmann (48:43):
Yeah, got it. Okay, good, perfect. All right, well, thank you again, Karen. Naomi, always great to have you, so thank you for joining. Thanks to our listeners and audience out there; we appreciate your support. If you have ideas for topics or guests, or want to be a guest, you can always reach out to Naomi, Mike or me, and we'd be happy to talk to you about that. Till next time, bye everybody. Bye everyone. Thank you.