Episode Transcript
Available transcripts are automatically generated. Complete accuracy is not guaranteed.
John (00:00):
Those who put their head in the sand are the most likely to be disrupted. I think there is going to be plenty of opportunity: those who understand generative AI, who understand how to use it as a tool to augment their own ability, are going to be well-positioned to continue to succeed.
Amith (00:17):
Welcome to Sidecar Sync, your weekly dose of innovation. If you're looking for the latest news, insights and developments in the association world, especially those driven by artificial intelligence, you're in the right place. We cut through the noise to bring you the most relevant updates, with a keen focus on how AI and other emerging technologies are shaping the future. No fluff, just facts and informed discussions.
(00:41):
I'm Amith Nagarajan, Chairman of Blue Cypress, and I'm your host.
Mallory (00:46):
Hello everyone, and welcome back to the Sidecar Sync podcast. My name is Mallory Mejias and I'm one of your hosts, along with Amith Nagarajan, and today we're excited to be bringing you a special interview edition of the Sidecar Sync. We're talking to John Huseman, who's an associate partner at a leading global management consulting firm, where he works with some of the largest organizations in the world.
(01:07):
You might be thinking: what do associations have in common with some of the biggest companies in the world? It might be more than you think. I'm going to share a little bit more with you about John's background after a word from our sponsor. If you're listening to this podcast right now, you're already thinking differently about AI than many of your peers. Don't you wish there was a way to showcase your commitment to
(01:28):
innovation and learning?
The Association AI Professional, or AAIP, certification is exactly that. The AAIP certification is awarded to those who have achieved outstanding theoretical and practical AI knowledge as it pertains to associations. Earning your AAIP certification proves that you're at the forefront of AI in your
(01:49):
organization and in the greater association space, giving you a competitive edge in an increasingly AI-driven job market. Join the growing group of professionals who've earned their AAIP certification and secure your professional future by heading to learnsidecarai.
As I mentioned, John Huseman is an associate partner at a
(02:11):
leading global management consulting firm, where he has spent the last eight years advising clients across industries. He specializes in digital transformation, technology strategy and operating model design, with a focus on generative AI. Before consulting, John co-founded Rodolo, a software consultancy that built custom business applications, emphasizing user experience for underserved business units in
(02:33):
larger enterprises. He holds an MBA from the Fuqua School of Business and a BS in business administration from UNC Chapel Hill.
If you feel lost in your generative AI journey as an association, I think this will be a great listen for you. We cover a lot of territory in the interview, but to me, one point stands out: we must shift our mindset from "we need to
(02:55):
implement generative AI because everyone else is doing it" to "we have specific business problems that generative AI could help solve." Enjoy this interview. John, thank you so much for joining us today on the Sidecar Sync podcast. We are so happy to have you. I'm hoping, first off, you can share a little bit about yourself and your background with our listeners.
John (03:18):
Yeah, absolutely. And Mallory, thank you for having me. Yes, my name is John Huseman. I am a management consultant, and so a lot of my time for the last, call it, seven and a half, eight years or so has focused on working with large companies, think Fortune 50, Fortune 100, or portfolio companies of large private equity firms, solving whatever their most pressing
(03:39):
challenges are. This has let me see a variety of different industries and capabilities. However, the last several years, most of my time has been in the consumer goods space and, likely most pertinent to your audience, on technology. And so this is everything from technology strategy or digital strategy and operating model to, what I think is probably more exciting to talk about, AI, and
(04:00):
specifically generative AI, over the last few years. So that's where I spend most of my time today. Prior to that, in a previous life, I helped start up a software company that built custom business applications, so, while I'm not a native engineer, I know a decent
(04:20):
amount about how products are created and how software is developed.
Mallory (04:23):
Awesome, awesome. And you were just sharing with me before we started recording how you and Amith know each other, if you want to talk a little bit about that.
John (04:28):
Yeah, so that aforementioned startup was with Amith and I working together. So that was back in New Orleans, gosh, probably about a decade or so ago now, which makes me feel old. But yeah, we got to work together for several years, which was a great experience.
Mallory (04:45):
And were you all talking about artificial intelligence 10 years ago, or not quite?
John (04:49):
I think Amith was questioning my actual intelligence more a decade ago. No, but actually it's funny, because what we were doing a lot of, that was early in the days of cloud hosting, and so I remember that when we talk about new technologies and adoption, a lot of what we had to talk about with our clients then was, hey, are you comfortable hosting on this thing called AWS, or the cloud,
(05:12):
and what is that? And I'm much more comfortable on-prem. And so it's a different technology, but a similar paradigm of, hey, this is a new technology that is disrupting how things have been done in the past, and how do you help people get comfortable with it and learn how they can use it in whatever else they're doing to make their lives better and more productive, and hopefully help the bottom line as well?
Mallory (05:34):
Do you feel like your perspective on technology implementation has changed from your startup days to now working with large organizations?
John (05:47):
Yeah, well, it's far easier with startups in many ways. Right, you just have an idea and you go do it. A lot of my time today is spent less on the answer, the technology itself, and more on the cultural and political issues that exist within large companies, right? And so, whenever there's something that's disruptive, that scares people, and so I'm helping them get comfortable with how this can empower them, how this can enable them to do things they
(06:09):
haven't done before, and spending a lot more time with legal departments than I used to, who are very worried about how this technology will work. And so I think that's a burden that large companies deal with that smaller companies don't have to. In many ways, while there might be more resources at a large company, those constraints can actually slow them down and allow small organizations to really move a
(06:30):
lot faster and experiment and trial things and see what works and what doesn't, which can often be a pretty big advantage.
Mallory (06:38):
Yep. Amith, I would say you agree with that, right?
Amith (06:41):
Yeah, I agree with all of that. I have fond memories and recollections of what John's describing with the startup here in New Orleans that we were involved in together, and ultimately that company had a nice ending: it was sold, and it was a great experience along the way. And it actually was an incubator that spun off a whole bunch of other software companies, which was really,
(07:01):
really cool as well. And one observation I'd share with our audience, just to kick things off from my viewpoint: I've known John for a long time, as you just learned, and John and I recently reconnected. We hadn't chatted in a few years until probably three months ago or something like that, and we had this great conversation all about AI adoption and John's practice
(07:23):
experience. He's quite modest in this interview. He's doing some very high-level work for some of the biggest companies in the world, working within one of the leading global management consultancies, and has done some very impressive things. And I thought to myself, it'd be really great to get John to share some of his experiences, both the exciting parts and also some of the challenges. And what he opened with there, in
(07:43):
terms of the difference between his current life and what he did in startup land, is actually kind of similar to what associations deal with, even though they're much smaller than the kinds of organizations, John, that you're dealing with now. They deal with a lot of red tape, a lot of bureaucracy, a lot of layers of volunteer leadership, governance boards, bylaws sometimes. So we'll get into that, I'm sure, in this pod.
(08:06):
But working through the human factors and the change management, I think, always has been the biggest challenge and opportunity with technology disruption, and it still remains the case, in my opinion.
Mallory (08:16):
Well, I want to jump right into the juicy stuff. So obviously we're here to talk about AI and the clients that you've worked with. When do you feel like you started seeing AI make a real impact in companies that you're working with?
John (08:28):
Yeah, so I think it's important to separate AI and generative AI. AI has been a big part of client work for decades, certainly as long as I've been a consultant, and you see it a lot in terms of forecasting algorithms, pattern recognition or optical recognition. That's been around for a while, sort of the AI and the machine learning
(08:51):
realm. The generative AI is the new sexy stuff. Really, the last few years is where, you know, I think it was probably November of '22, something like that, when OpenAI first launched ChatGPT, which is where I think most people started becoming aware of this. And obviously it's funny: we sort of complain about the limitations, but it's kind of crazy. This sort of came from nothing to where it is, you know, only
(09:13):
in a matter of a few years. It's been pretty rapid development. But where I'm seeing companies use it is really trying to use it everywhere they can. And so for my clients there are really three primary ways they sort of bucket the use cases or applications. And again, this is for the consumer product
(09:33):
space, but I think it's probably relevant to most industries. There is: how do I better engage with my end consumer and have a more personalized relationship with them in a way that the consumer finds valuable? I am not just forcing it on them; they actually have to find value from this. There is: how do I work with my distributors or retailers, this middle layer that connects my products to the end consumer,
(09:56):
and make them more effective, increase their satisfaction? And then, third, how do I just help my own employees be more productive, be happier, improve retention, things that are really relevant to any organization. Those are really the three main channels that we look at.
(10:35):
There can be a trap when a new technology comes, especially one like generative AI, of, well, let's just start building stuff and see what happens, which is kind of cool in a demo, but you can't really see it on the P&L. You can't really see, did it actually drive a meaningful impact? And thinking more in terms of, okay, well, how do we use this tech to actually change how we do the business and change how we run things? And one way I sort of help my clients think through this is
(10:58):
not just focusing on what's today, but having these sort of big, bold, ambitious bets that might be four or five years down the road: where are you trying to go with this? And so in the consumer space, that might mean, hey, rather than having one or two or three campaigns going at any given time, what if we have 8 billion, and we have a personalized, completely unique campaign that's bespoke for every
(11:19):
individual? That's not something that we can easily do today, largely just because of data issues, but that could be somewhere we want to go. And if you have that sort of ambition, then you can think through, what are the stepping stones over the next several years to help get us there? What capabilities do we need? What talent do we need? And so that is an area where, as many companies are starting to think, hey, we're spending a lot of money on this,
(11:39):
what do I have to show for it, things are starting to be a little bit more structured and thoughtful about where we're trying to go to actually see those results, whether it be financial or however else you're measuring, rather than sort of where it has been up to this point, at least in my experience, of, let's build something and figure out the business case later.
Amith (11:58):
Yeah, it's really interesting, because I think those three categories are directly relevant to associations and the nonprofit sector. I mean, the first one that you described, obviously the end customer, the end consumer, it's the same thing: it's the member, it's the constituent. And in the other two, obviously they have staff, they have employees, which I would also include kind of their close-in volunteer leadership that contributes a lot of energy. The middle layer is interesting, because in the context of CPG,
(12:21):
you might be dealing with distributors and wholesalers, retailers, all that, the full supply chain to get to the end consumer. In the case of associations, you typically aren't dealing with that kind of a scenario, but I still think there's lessons we can learn from some of your experiences there, because associations do distribute their value through partners in a lot of cases, sometimes through partnerships
(12:42):
and affiliates, sometimes through things like chapters and structures like that. So ultimately, I think there's some really good parallels. I just want to quickly draw those for our listeners so they understand kind of how your world might relate to the way that they're thinking about things.
Mallory (13:01):
Do you feel like any of those buckets have easier wins than others, in terms of the engagement piece, the middle layer that you mentioned, and then employee productivity? Is there one that you always like to start with?
John (13:14):
I think the employee productivity, or anything internal. You just have more control over it, right, and legal departments stress less about it. Whenever you're actually directly engaging someone outside your company, you have to be very careful there. There's all sorts of ethical issues. You need to make sure that it's not saying something you wouldn't want to say, protecting the brand, and so that just
(13:34):
gets more challenging, in that you just have to put a lot of thought into how you're going to mitigate any potential bad actors or bad outcomes. Whereas, you obviously don't want to give bad information to your employees, but it's more of a contained risk than it is when it goes external. And so oftentimes we encourage our, or my, clients to
(13:54):
start there, both because it is a little bit easier, it feels a little bit less dangerous. It also allows them to help build excitement and engagement throughout the organization. It's one thing to hear about, hey, we're doing this cool thing that helps these consumers, and obviously I want consumers to be great, but I'm in finance, I don't really ever see them. Versus, oh, this is something I can actually see and touch, and it helps me make my
(14:16):
job easier or do something I don't want to do. That's pretty exciting. And to Amith's point earlier on the change management aspect here, a big part of the battle is just getting buy-in: this is something that is useful to me, and it's worth me putting up with a little pain of trying it out and recognizing that new tools have bugs and they don't always work as we thought. But getting those wins starts building that advocacy and
(14:47):
building those champions, and getting people just able to use it and see how it can help them is often a big part of the initial battle.
Amith (14:51):
John, how frequently do you find, in these larger organizations, there's resistance from employees due to fear of job loss?
John (14:56):
Quite a bit. It also depends on what the use case is. I think it's less so on the specific use cases we are discussing, because often we try and develop those with the teams, and more what they're reading in the newspapers. You see all these headlines of, like, 70% of white collar jobs disappearing, and people worry about that. And this is not new:
(15:20):
creative destruction has been a big part of our economy for the last century, if not more. And so I think there is recognition that there will be disruption here, and that worries them. I think where I tend to land, and this is not a unique view to me, is that those who put their head in the sand are the most likely to be disrupted. I think there is going to be plenty of opportunity, at least
(15:43):
until the robots completely take over, but plenty of opportunity for those who understand generative AI, understand how to use it as a tool to augment their own ability; they are going to be well-positioned to continue to succeed wherever they go. This is just, I think, a little bit like 25, 30 years ago, people who refused to use the internet. They said, this is a fad, or, I don't use email.
(16:05):
It does replace part of your job, right? Like, you'd spend less time on research. You'd spend less time on, maybe, writing letters, whatever the case is. But it ultimately allows you to do a lot more things, and it augments your ability to be far more productive. And so that's the message we try and get across: how are you going to use this technology to upskill yourself and allow you to be actually a much more productive and attractive
(16:27):
employee wherever you go, recognizing that some things will go away. We don't have telephone operators anymore, and that was a huge employer, you know, 50, 100 years ago; now we have developers, and we have people who create generative AI. And so there is always sort of an upskilling and increasing of what people are working on.
(16:47):
But it is important to help contextualize that and frame that and have people understand where they fit in.
Amith (16:52):
One of the things we think associations have a really critical role to play in is the area of educating their professions on AI, and certainly educating their teams on how to use AI. And you know, it goes back to, I throw this out all the time on this pod, that I think there's gonna be two types of people in the near future: the people that are natively knowledgeable,
(17:14):
trained, up to speed on generative AI or AI in general, and there's going to be people who are unemployed. That's pretty much it. And the reason I say it that directly is, we're trying to get people to learn this stuff, right? Just to first of all get awareness of what it can do, and then learn a little bit of it. But you know, I think a little bit about the context of uptake,
(17:34):
of how people do this within the industries you work in. And so every industry has associations, sometimes multiple associations. Sometimes there's associations for little slices of the industry or different sides of a supply chain, for example. But I believe that they all have an opportunity and a responsibility, in a way, to train their professions on AI, to better
(17:55):
disseminate this information. What I'm curious about is, within your practice and the clients you work with, what's the uptake in terms of learning? If you take a broader population of employees, how many of them have actually been offered AI training? How many of them have taken that up? I don't know if you have rough ideas of any quantitative metrics on that.
John (18:15):
Yeah, I guess I would share directional metrics on this. So for a lot of my clients, that has been a big push over the last year: how do we upskill and train our people on this tool? One, to use the tools we're giving them, but, as importantly, for them to come up with their own ideas and their own use cases with the raw tools, whether it's ChatGPT or Claude or any of
(18:36):
these, to use the tool itself in addition to anything we're creating for them. And so helping to figure out how to design those training courses is important. Ideally, involving gen AI in those training courses to sort of act as a teacher can give you two birds with one stone, and so that's something that we encourage our clients to do: think through how you
(18:57):
can use this as part of the training, not just have it be a PowerPoint slide. And so I think in terms of uptake, it sort of varies. At least at my current client, it was offered to everyone, so I think tens of thousands of employees across the world, and I think almost everyone has taken the training. That was partly because it was a mandatory training.
(19:21):
I still think time will tell in terms of use, in that there are still a lot of people who, I think, understand it but don't think it's for them. They're like, well, I'm an accountant, I don't need this tool, or, that's for the marketing people to come up with cool slogans and pictures, or, I'm in ops, that's not for me. And one area that we are still trying to crack is helping
(19:42):
people understand that really, no matter what your job is, this is a useful tool. And one of the analogies I use is, think what you would do, as the cost of labor approaches zero, what would you do differently? If you could have a reasonably bright, or, as some of the researchers have shown, maybe even PhD-bright,
(20:02):
you know, analyst there who could help you develop your answer for free within a few minutes, how do you do things differently? And thinking about that is challenging for a lot of people, because they were trained on Google, right? And so if I have a question, I write in five or six keywords.
(20:23):
I see what I get. If it's not on the first page, I try five or six different keywords. If it's not on that page, I might give up and say it's not there. This is a completely different interaction model, right? I'm actually going to give you quite a bit of detail, like I would a person I'm assigning this to. Like, hey, here's what we're trying to do, here's what we're trying to achieve, here's
(20:43):
an example of how I've done it in the past, whatever that is. And if you get something back that maybe you don't think is exactly right, telling it why, saying, hey, this isn't it, here's why it should be different, or, I want it to look more like this, or, stop that, or whatever the case is. So treating it as a person rather than a machine, and how you give feedback to a person rather than a machine, is a huge
(21:05):
unlock. That is just a different approach than most people will take, and so that takes time. It's one thing to see it in a training module. It's one thing even to see a demo. It's another thing to actually do it. And I think part of the struggle often is, now that we are more hybrid, it's harder to
(21:26):
see the person next to you do it, right? And I think that assimilation is just slower than it was when there was more co-location. We're getting there, but especially for the larger organizations, who again are busy and have day jobs, it is a slower uptake than maybe we would like.
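To make the interaction model John describes concrete, here is a minimal sketch contrasting Google-style keywords with a briefing-style prompt sent to a chat API. The model name, scenario, and prompt wording are illustrative assumptions, not anything specified in the conversation:

```python
# A sketch of briefing-style prompting: give the model the context you'd
# give a person you delegate to, not a handful of search keywords.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

keyword_style = "association member retention ideas"  # the old habit, for contrast only

briefing_style = """You're helping me, a membership director at a trade
association, draft a member-retention plan.

Here's what we're trying to do: lift first-year member renewals.
Here's what we're trying to achieve: a 10% improvement by Q4.
Here's an example of how I've done it in the past: a 3-touch email
series plus a personal call at month 10.

Propose a plan. If anything is unclear, ask me clarifying questions
before answering."""

response = client.chat.completions.create(
    model="gpt-4o",  # assumption; any capable chat model works
    messages=[{"role": "user", "content": briefing_style}],
)
print(response.choices[0].message.content)

# Feedback then works the way John describes coaching a person: reply in
# the same conversation with "this isn't it, here's why it should differ."
```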
Amith (21:43):
You know, John, I've got two quick things coming out of that. And then I know Mallory has a list of things she's excited to ask you, so I'll pass the baton back to her. But the two quick things. One is the mandatory versus voluntary on the training piece. We talk about that from time to time here. Associations tend to be very consensus-driven and an opt-in kind of mindset, as opposed to a thou-shalt-do-this kind of mindset. In
(22:06):
the organizations you've been involved with, for the ones that have had mandatory training versus the ones that may not have chosen to do that, what's the difference in uptake in AI training amongst their teams?
John (22:23):
They take it versus they don't, right? For most of my clients, they're busy, right? And so if you give them an optional training, the people who are most likely to take that are people who are interested in the topic and have probably already learned about it ahead of the training. And so the people you need to get to are those who don't think it's relevant to them or find it somewhat intimidating. Those are the people you need to get to. The people who are willing to take it are actually probably
(22:45):
already fine.
Amith (22:46):
Yeah, and I wanted to have you say all that because our listeners need to understand that the same thing applies in corporations, and this is true in government as well: there are times when you have to mandate things, and this is one of those times. It is actually the responsibility of the leader to ensure that their people have a path to growth, and absent training in AI, it's clearly, obviously our opinion here, that you don't have a very rosy
(23:10):
future. So I think that's kind of the moral imperative of the leader to drive that. Of course, it's important for the business as well. So the second thing I wanted to ask you before I hand it back to Mallory is this. You said something that I think is actually quite profound: that people have to rethink what the possibilities are. In a way, this isn't exactly what you said, but to ask a
(23:31):
better question, or maybe to ask many more questions, whereas we've been trained to essentially formulate a hypothesis and then try to test it. And that might be true in drug discovery, it might be true in a business paradigm, it might be true in testing a marketing campaign, but we generally are doing kind of a serial process of coming up with a hypothesis, testing it. Generally speaking, we fall in love with it while we're testing
(23:52):
it, so we bias ourselves into saying yes, it's good, because we don't want to be a failure. That, of course, is endemic to all of us. And then we move forward, or we don't. And now I think we have the opportunity potentially to ask 10, 50, 100, 1,000 questions in parallel. What are your thoughts on that?
John (24:07):
Totally agree, and I think you can still even have a hypothesis. You can just validate it much faster. Or it's even thinking through how you frame the question. So: I want you to be a really staunch critic of all new ideas, and I'm going to propose an idea to you, and I want to hear your feedback. That can help me prepare for the eventual
(24:27):
feedback I get, and I can sort of iterate on that. Or even: I had this idea and I don't even know how to think about it. Can you help me provide structure to it? Or can you help me write a better prompt to get you to give me more structure on this? You can get sort of meta with it. Or even an example that I've seen, which is somewhat tangential to what you said: especially now that it has voice mode and other
(24:49):
things like that, I have colleagues who, just on their drive home, brain-dump and just talk to it, and there's not a lot of structure to it. Like, oh, I had this meeting and this is what happened. Oh, and I also had this idea. It's completely stream of consciousness. But then they have this 20-minute dump of information at the end of every day, and then they say, hey, organize that thinking and provide structure to it, and
(25:10):
that way I can search it and I can come back to it. It's a way for them to use the tool to help them digest information and keep themselves smarter. That is another way. It's not exactly where you're getting at, but the idea of using the tool to ask questions and ask it to do things that aren't just a better version of today but a completely new thing that you don't do today, that's important. And so part of the
(25:34):
guidance I give my teams, and especially those that work for me, is: every time I ask you to do something, or anyone asks you to do something, your first 30 seconds should be, okay, how can I get gen AI to do this for me, or at least give me a starting point? The value of going from a blank page to a 60% answer is enormous. And so this isn't to say you should write it in and just send whatever comes out
(25:58):
directly, not look at it. But it is the way: how do you get the 50, 60, 70% answer, and either iterate with the tool or just take the pen and go from there. So one, start with that. And two, if you don't know how to use the tool, ask it. And this, I think, can be an unlock, especially for new people. Like, hey, I don't know how to write a prompt.
(26:19):
I'm trying to do this thing. Can you help me write a prompt to help you understand what I want? It's kind of meta, it feels a little bit like three-dimensional chess, but it can actually be a really useful tool that even I use sometimes. There have been times where I'm just not getting what I want, and I'll respond like, hey, look, you're not giving me what I want. Can you help me phrase this so you better understand what I'm trying to get at?
(26:39):
I'm trying to accomplish these things. And it gives me a prompt that I can then feed back to it, and it sort of feels like a cheat code, but it can be really effective. And sort of that trial and error, and how do I rethink how to use this tool, is, you know, more than half the battle. Just start using it and testing and experimenting, and you'll find where you're comfortable and where you have more versus less success along the way.
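The end-of-day brain-dump John mentions is easy to sketch. This minimal example assumes the voice memo has already been transcribed to text (for instance by a phone's voice mode); the model name, prompt, and sample note are illustrative assumptions:

```python
# A sketch of the brain-dump organizer: feed a raw stream-of-consciousness
# transcript to a chat model and ask it to impose searchable structure.
from openai import OpenAI

client = OpenAI()

raw_dump = """had the meeting with the chapter leads, budget question came
up again, need to follow up with finance... oh and an idea: what if the
annual survey ran quarterly instead..."""  # illustrative sample

structured = client.chat.completions.create(
    model="gpt-4o-mini",  # assumption; any capable chat model works
    messages=[
        {"role": "system", "content": "Organize this stream-of-consciousness "
         "note into sections: Meetings, Decisions, Ideas, Follow-ups. "
         "Keep the author's wording; add nothing new."},
        {"role": "user", "content": raw_dump},
    ],
)
print(structured.choices[0].message.content)

# Appending each day's output to one searchable file gives the
# "come back to it later" behavior John describes.
```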
Amith (27:02):
Building on that for just a second, one of the cheat codes related to that meta-prompting approach that I've used quite effectively is, I go to something like a ChatGPT or a Claude and I say, hey, listen, I need to go work with another AI that's not quite as sophisticated as you are, and so I'd really love your help in creating a prompt that a lesser AI would be able to understand and complete the following task.
(27:22):
And it's kind of like, you know, it's just like all of us: oh yeah, I'm a superior AI. So the AI responds really well to flattery, I just got to say that.
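Here is a minimal sketch of that meta-prompting cheat code: ask one model to author the prompt, then run the generated prompt as the actual instruction. Model names, wording, and the sample task are illustrative assumptions:

```python
# A sketch of meta-prompting: one model writes the prompt that a second
# (or the same) model then executes.
from openai import OpenAI

client = OpenAI()

task = "Summarize member feedback surveys into three themes with quotes."

# Step 1: ask the model to author a prompt for the task.
meta = client.chat.completions.create(
    model="gpt-4o",  # assumption
    messages=[{
        "role": "user",
        "content": (
            "I need to hand a task to another AI assistant. Write a clear, "
            "detailed prompt it can follow, including the goal, the input "
            f"it will receive, and the output format. The task: {task}"
        ),
    }],
)
generated_prompt = meta.choices[0].message.content

# Step 2: run the generated prompt as the actual instruction.
result = client.chat.completions.create(
    model="gpt-4o-mini",  # the "lesser AI" in Amith's framing; assumption
    messages=[{"role": "user",
               "content": generated_prompt + "\n\n<survey text here>"}],
)
print(result.choices[0].message.content)
```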
John (27:32):
It is funny how it does that. Even things like, hey, this is important to me and my career, please take your time, take a breath and think through it before you go. It feels weird to write that to a probabilistic engine, but it actually can be quite effective. So, yeah, I still say please and thank you when I write
(27:55):
prompts. It just feels natural. Again, it also helps me get in the mindset of the same way I'd ask a person; I'm asking this machine the same way, because that is a closer paradigm than maybe we've had historically.
Amith (28:06):
Well, and you and I are both also hedging for the likely future that we're heading into, where we're not in charge of this planet.
John (28:12):
Exactly. We'll see what happens. Nice.
Mallory (28:18):
Hey, now, we're not alarmist on the Sidecar Sync. Well, I think we're all in agreement here that training is kind of one of those essential first steps. Education, that's really the whole reason why Sidecar exists. We do have a lot of association leaders that I've had conversations with that think, okay, we're rolling out this AI training, but we're struggling with what we do next. We've made this investment.
(28:39):
How do we justify this investment to our board? But also, when talking about employee productivity, how do we measure everything that we've trained on? So, when you're working with clients who particularly want to improve employee productivity, what do you see as that next phase after education or training?
John (28:59):
It's a problem a lot of companies are dealing with right now. This is cool, but now what? Show me why this makes sense, other than cool chatbots. And there are two ways I think about it. One, this gets back to the point I made toward the beginning: it's important to have that bold ambition you're headed towards, because that's ultimately, I'm improving employee productivity
(29:21):
to accomplish X. So being focused on the ends rather than the means is important, because then you can not only show financial outcomes, but you can also show how you're progressing towards whatever the bigger, bolder thing that you're trying to accomplish is. So that's one thing. Second is being really clear about what you're trying to measure and why.
(29:42):
A lot of times they say, we're going to improve employee productivity, and the implicit assumption is, if we do that, we'll save money. And it's sort of like, not really. Unless you're removing headcount, there's really no saving there. Employees are slightly more productive, but there's probably not a clear financial savings. There might be an indirect financial savings of, like, they're generating better answers, our customers are
(30:05):
happier, so we have higher retention, but that's a harder connection to tie. And so I think it's important to understand, if our goal is to achieve savings, that must mean we are going to reduce headcount, reduce tech spend, reduce something, and gen AI will help us get there. So that's key. It's also, I think, perfectly reasonable to have more
(30:25):
qualitative measures: we're going to have higher employee retention, we're going to have higher NPS, we're going to have more visits, more traffic, whatever it is. Just be clear about that. I also find, at least with the clients I work with, that savings comes more from necessity than planning. What I mean by that is, if you want to save 20%, for example,
(30:52):
it's a lot harder to say, okay, go find me 20% using gen AI, than it is to say, I already took the 20% away, use gen AI to make it work, right? And so that necessity there of, oh, actually, we don't have these people anymore, so we have to find a way to use it, and here's a way we can save everyone two hours a day and that will fill the gap,
(31:13):
that I see work quite well, even though it can be somewhat stressful. It's just much harder, just in human nature, to be like, yeah, we could do this and it could save us time, but I don't know, I'm not sure. That's just harder. There needs to be some forcing function there, I think, to really see at least cost savings. Again, this is separate from growth
(31:34):
generation or other sorts of metrics, but on the cost side, that's what I see with my clients.
Amith (31:40):
You know, building on Mallory's question of where to get started, a lot of people I talk to say, hey, we really want to try to find a way to take some of the load off of our customer service or member service team, which tends to be overwhelmed, and also improve the quality of customer service at the same time. And there's actually this concern that comes up saying, hey, I've got X number of people that answer emails and phone
(32:00):
calls all day, and what happens to them if we automate it? And I do think that, absent the desire and reason to cut headcount, you're not going to have a financial savings from it, but you potentially can create far more value. Part of it is, the people who do that work oftentimes are far underutilized, not necessarily in terms of their time, but in terms of their knowledge. So they're answering the same question over and over and over,
(32:21):
the same case over and over and over. They are probably trained on lots of other things that are perhaps more interesting to them intellectually, but also potentially higher value. And so by using AI to automate some of the lower-level tasks in, for example, the customer service workflow, you do some interesting things there in terms of employee morale and retention, lower training cycles.
(32:42):
The other thing, too, is I think that particular use case might be worth digging into a little bit with you, because I look at it as kind of a dual-benefit type of scenario, in that it can make the internal ops more efficient. But if you automate customer service really well in this way, the way we're doing it now, it actually can increase the quality of customer service, which is pretty remarkable, because if you think about the entire history of customer
(33:04):
service tech up until recently, it's only been on the one side. It's about cost savings. Nobody thinks phone trees and the kind of bots that were on screens before were enjoyable. Right? It's like, every time you get on the phone with your airline, you go, agent, agent, agent, immediately, right? Because you want to get past the technology. But that's not the case anymore. You know, like one of the case studies we talk about a lot on
(33:25):
this pod, it's a little bit old now, it's about 12 months old, which in AI timelines is ancient, but it's this Klarna case study that you've probably heard of, John, where they, you know, put an AI agent in their workflow, and the most notable outcome from that was they went from an 11-minute resolution time to a two-minute resolution time. I don't know too many people who'd prefer to be on a customer service request for 11 minutes instead of two.
(33:46):
Right? And so when something becomes better, cheaper and more available, generally, people consume more of it. So perhaps this could actually lead to an increase in customer service requests, which most organizations would view as a net negative, right? The metrics typically are not positive. You're not rated well if you have more customer service inquiries. People think your product sucks or there's some problem with it
(34:07):
. But what if we flip that script and said, hey, our customer service is so great, and, oh, by the way, we can scale it to infinity. People want to talk to us all the time. Wouldn't that be awesome?
John (34:17):
Yeah, and educate them on how they can use a tool differently, or upsell opportunities. Yeah, and I totally agree with you. I mean, customer service, I think, is not a pleasant experience for either party right now. It's no fun to be on hold for 10 minutes, and no fun to be yelled at by every customer knowing they've been on hold for 10 minutes. And so one thing I saw with one of my clients, which I thought
(34:38):
was interesting, was, we had a similar situation. They started using AI for customer service and contact centers, first to just inform the agent, giving them sort of a knowledge base to search and help answer questions, and, in some cases, starting to answer some of the calls. But in their pilot, what they found is, actually, the average
(34:59):
call time, to your point, went down, but the call time with the human operators actually went up. And it was like, what's going on here? This is a disaster, our people are somehow getting worse. But when they actually looked in and started listening to the calls, they were actually having much richer, deeper connections with the people who called in, because the easy stuff was going to the bots, right? Change my address, my order's lost, whatever it is.
(35:20):
Instead, it was like, hey, I'm having this problem, can you help me figure it out? And it was much more of a, yes, let me figure it out, let's work together, let's brainstorm. And you build a much deeper connection there, to your point, which is great for the customer, who had a good experience: hey, someone spent 20 minutes helping me solve a problem. It's also a much better experience for the operator
(35:40):
who's talking to them, because they feel a lot more purpose. They're helping someone. They have a much more enjoyable experience than, let me go track your order, or whatever the case may be. And so I think you're right. You may need fewer people, depending on the proportion of calls that are sort of easy versus more challenging, but for the people who are staying, I think it's a much richer and better experience for both sides of the conversation.
Amith (36:01):
So if I was an association CEO, based on that, what I would consider doing is putting in an AI, an agent, to handle a lot of the routine customer service, member service inquiries. But then I'd have my team, whether it's five people or 50 people that do member service, I'd have them actually proactively reach out to members
(36:22):
and try to offer value, right, and try to find ways to be proactively able to serve. And I think there's lots of creative ways people could do that. Just have conversations. Just say, hey, I'm checking in, I wanted to talk to you about your experience with this, and provide value. Not just like a survey, people don't like those phone calls, but something that's truly value-additive. And, of course, if you use some AI to drive personalization and have a better idea of what John or Mallory or Amith might be interested in, that can be helpful too.
(36:43):
But those are things you can't even begin to think of right now, because we're so overwhelmed by the workload we have. And this is the kind of creative outlet, I think, that you refer to as creative destruction: we destroy the traditional workflow and reduce that purely to automation, but leave room for this incredibly creative outlet.
John (37:01):
Yeah, and there's a middle ground also. What they were trying to solve is, the person, whoever was calling, would describe their problem, and it wasn't like, pick one of seven options and I'm going to ask you to repeat yourself. Use whatever words you want, as much time as you want, and the AI would try to synthesize that, and it could then use logic of, like, this is easy, or this is trickier, and the trickier ones
(37:22):
are the ones they bring the people in on. And even when they brought the people in, the people were not there on their own. They also had the AI, using Whisper and other tools, listening in, and so on their screen it could both track the call and say, here's an answer to their question, or, depending on how commercial you want to be, here's an opportunity to upsell or encourage a new event or
(37:44):
product that maybe they weren't considering earlier. And it could also give coaching of, they seem frustrated, here's something to say to help calm them down. You can think through ways of arming your people with better tools to have better conversations, in addition to automating a great degree of it.
Amith (38:02):
I want to unpack that for just a minute, because not all of our listeners are familiar with Whisper. Whisper is actually an open-source model from OpenAI, one of the few open-source things that they've published, that's inferenced on a whole variety of clouds, including Groq and AWS and Azure, and provides essentially real-time audio-to-text transcription, which can then be fed into any kind of other models.
(38:22):
So what John's describing is this workflow where, in real time, the AI is not only listening, transcribing what the speaker on the other end is saying, but potentially being able to suggest answers and ways to improve the call. And that's, I think, a tremendous use case for that technology. That's a great example of a co-pilot scenario versus an autopilot scenario.
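A minimal sketch of the Whisper piece of that co-pilot workflow, simplified to batch rather than streaming audio; the model names and file path are illustrative assumptions:

```python
# Transcribe a chunk of call audio with the open-source Whisper model,
# then ask a chat model for a suggestion the human agent could use.
import whisper
from openai import OpenAI

stt = whisper.load_model("base")
transcript = stt.transcribe("caller_chunk.wav")["text"]

client = OpenAI()
suggestion = client.chat.completions.create(
    model="gpt-4o-mini",  # assumption
    messages=[
        {"role": "system", "content": "You assist a live support agent. "
         "Given what the caller just said, suggest a short, helpful reply "
         "and flag any signs of frustration."},
        {"role": "user", "content": transcript},
    ],
)
print(suggestion.choices[0].message.content)
```

A production version of what John describes would stream audio in near-real-time chunks rather than transcribing a finished file, but the transcribe-then-suggest loop is the same.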
John (38:42):
Yeah, and then use it on the back end to provide feedback to your team and help them get better. Like, hey, here's a call that maybe didn't go so well, here are some things you can try in the future. So you have this continuous improvement element as well, to give you better visibility into who's doing well and why. Or maybe an agent, not an AI agent, a human
(39:03):
agent, does something that you wouldn't have expected that actually worked really well, and how do you then incorporate that into your operating procedure? So, yeah, there are things you can do to avoid the call, things you can do to improve the call, and things you can do after the call, using the technology to make it a better experience for all sides.
Mallory (39:20):
In that example, did the consumer think they were talking to a human, or were they aware, more or less, do you think, that they were talking to an agent?
John (39:29):
It's a good question, Mallory. This was maybe eight, nine months ago, so I suspect they probably knew they were talking to a bot at the time. The tech's gotten better, so you could probably hide it now. In this particular pilot, the company actually wanted to make sure it was clear, so they told you at the beginning that you were talking to a bot.
(39:50):
They weren't trying to trick you, and it also made it clear when you were talking to a human. And so a lot of what they were trying to do is figure out, how do we triage these calls appropriately and send them down the right path? But you could go down a path where you are trying to make it hard to tell the difference. You have to make a call on how comfortable you are with that,
(40:12):
and how comfortable your customers would be with that. But the tech's getting good enough now that it can be hard to distinguish.
Mallory (40:18):
Mm-hmm. I was thinking of my own personal examples, where I don't even do phone trees anymore. I just say, customer service rep, over and over and over again, smash zero, see what happens. Yeah, I'm not even playing that game anymore, so I'm going to have to kind of reevaluate that when things change. But I think that's a really neat use case that you shared. I'm curious if you have any other gen AI use cases that you
(40:42):
are particularly proud of, or that you always go back to in terms of how successful they were.
John (40:47):
There are lots that come to mind. It's also, I think, unlike some other tech, just far more expansive. A use case is really whatever you can think of that might work or might be a problem during the day. One that's fairly simple but, I think, quite powerful is just better knowledge management within your association, within
(41:09):
whatever organization you're in. I would imagine for a lot of nonprofit organizations, especially if there's volunteer labor, there's a lot of tribal knowledge of, well, Mallory just knows this, because Mallory's been here for 20 years and she's always known this, and that can make transitions quite challenging. And one thing I did with one of my clients is, they did tons of
(41:32):
consumer research of, hey, in this country, for this product, for this situation, for this demographic, whatever it was, and they spent lots of money on this. And unless you were directly involved in that project, you probably had no idea it happened. And they were spending millions and millions of dollars on this. And so the idea of being able to say, hey, what if we take all these, you know, hundred-slide PowerPoint decks and PDFs and
(41:54):
throw them into this tool, and then anyone can just query it: I'm launching this new product in this country and I'm targeting this demographic, what should I be aware of? And it can then pull from all that, and provide the sources if you want to go deeper, but can just give you a quick answer. And that applies to really any organization. Every organization has a ton of knowledge. Remarkably little of it is codified.
(42:15):
And even if it is codified, if it's on slide 74 of an onboarding deck, no one's going to see it, and if they do see it, they're going to forget it 10 minutes later. And so this idea of having the knowledge of the company accessible: we sort of have this now on the internet, we take it for granted that essentially the knowledge of humanity is accessible. You could do that for your company as well, your
(42:36):
organization. And so I think that's a really good, relatively easy use case to start with, to get you comfortable, where everyone can start using and understanding it. You can think very small, of, like, maybe we just put our HR policies in there, so if someone's like, hey, do we have dental coverage, or what's our vacation policy, it can answer questions like that. To much more expansive, of, like, for my clients, hey, I want to
(42:59):
write a new brief for a new campaign, base it on all the research we have, and give me a draft campaign that I can edit and then share with an agency. So there's a wide spectrum of applications, but that's one that I think is sort of an easy one, a good one to get started on.
Mallory (43:18):
I think what's interesting there, too, and we talk about this a lot, is the idea of knowledge management internally, but then, for associations as well, the idea of knowledge management for their members, because they're huge repositories of some of the most authoritative content in their professions or respective spaces. So being able to synthesize that with an AI and then have their members interact with that AI to get that information. And that exists,
(43:39):
we talk about it a lot; it's called Betty Bot. But I think that's an interesting use case for sure for associations.
Amith (43:46):
Yeah, you know, another related element to this is kind of the untapped knowledge from unstructured data, which John's referring to as well as part of what he just described. And I think it's just an opportunity to zoom in and say, well, when you think about knowledge management, part of that is understanding who knows what, as he pointed out, and that is, by itself, part of the tribal knowledge. It's not just the fact that Mallory knows X and has known
(44:07):
that for 20 years, but the fact that I know that Mallory knows X is also tribal knowledge. Yet that information actually does exist in digital form. It's probably reflected somewhere in your emails, it's probably reflected somewhere in your SharePoint documents, it's probably reflected in your Teams or Slack conversations, by virtue of the back and forth and the types of things people have
(44:28):
said, and you own all that information. And so an AI would be able to continuously scour all these unstructured sources with the purpose of essentially being able to index and catalog what people are knowledgeable about, and being able to connect people based on what they're working on. Another related thing is, like, John and Mallory both work at large company X and they're both working on basically the same
(44:49):
project, but have no idea that they're both working on the same project. This happens all the time. It in fact even happens at associations with 100 employees or less. How do you detect that? How do you know that? How do you connect these people so they can collaborate, to not only reduce the potential for redundancy, but also just make it better? Right? And these are things that, I am confident, every workforce tool, workflow tool, will have.
(45:12):
This will be built into Microsoft 365. It'll be built into Google. It might take a few years, but there's also opportunities for a lot of third-party apps for now. We actually have something very similar to what I just described on the drawing board as a product that we're thinking about building. And I think that this is crazy, right? Because, historically, you would try to maintain that kind of thing, you'd have like a database of
(45:32):
all the skills of all your people, but it gets out of date by the minute; the moment you put it together, it's so hard to keep it up to date. So I think this is such an amazing opportunity to kind of extract those structured insights from this mountain of unstructured information we have.
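One narrow slice of what Amith describes, detecting two people working on the same thing, can be sketched with the same embedding machinery. The model name and sample summaries are illustrative assumptions; a real system would derive the summaries from emails, chat, and documents the organization already owns:

```python
# Embed a summary of what each person has been writing about, then flag
# pairs whose work looks similar enough to suggest duplicated effort.
from itertools import combinations
from openai import OpenAI
import numpy as np

client = OpenAI()

work_summaries = {
    "John": "Drafting an RFP scoring rubric for the annual vendor review.",
    "Mallory": "Building evaluation criteria to compare vendor proposals.",
    "Amith": "Planning the fall conference keynote lineup.",
}

resp = client.embeddings.create(
    model="text-embedding-3-small",  # assumption
    input=list(work_summaries.values()),
)
vecs = {name: np.array(d.embedding)
        for name, d in zip(work_summaries, resp.data)}

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# The 0.5 threshold is a placeholder; it would be tuned in practice.
for a, b in combinations(work_summaries, 2):
    if (sim := cosine(vecs[a], vecs[b])) > 0.5:
        print(f"{a} and {b} may be working on the same thing ({sim:.2f})")
```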
John (45:51):
Absolutely. And I'm sure you all have covered this in the past, but I know one area that some of my clients get nervous about is, like, hey, but wait, isn't all our data going to go train the public models, and it's all going to leak? Now all clients have these enterprise instances where nothing leaves their instance or domain, and so this is all protected and safe and secure.
(46:12):
The models will still sometimes give wrong information, and so there are still hallucinations and other issues you need to work on; it's not bulletproof. But it is not the issue where I think there was a lot of concern a year or two ago, of, I don't want to give it all my information because that's going to then be public domain.
Mallory (46:28):
When I asked you, John, about successful use cases, you said there are so many to pull from. We talked about knowledge management, we talked about customer service. You saying there are so many to pull from makes me think you are, one, very good at your job. But I'm curious, if you were an AI and you could kind of extrapolate across all the AI implementations and rollouts that you've done with clients, do you feel like there are any
(46:50):
patterns, any threads that you have noticed, in terms of clients that are really successful and maybe clients that are less successful, that you could kind of generalize and share with our audience?
John (47:00):
Yeah, absolutely. So this is not dissimilar from any project, in that the ones that are more successful have really clear definitions of what success is. It's really easy to say, this is really cool, let's build something, and then you build it and someone's like, well, what does it do? It does this one thing. Do we want that one thing? I don't know. And so, maybe to say it another way: I don't advise my clients to have a gen AI strategy.
(47:21):
I advise them to have a strategy and think how gen AI can enable it. Those should not be separate things. And so, similarly here, this is less of, let me go think of a bunch of gen AI use cases, and more of, how can I go improve my business or improve my organization, and then, as part of that, I should think how gen AI can help accelerate that or
(47:43):
make something possible that wasn't possible before. And so that's where I see more success: we're trying to accomplish a business goal or business outcome, and gen AI enables that, rather than, we're trying to build this chatbot, which is a solution in search of a problem.
Mallory (48:00):
I'm curious what you think about that, Amith, in terms of gen AI fueling the strategy, because I feel like what we have often talked about on the pod is, sometimes these strategies for associations can be so set in stone, and they can be to improve or replace some legacy system that might not drive that much change for them in the end. So what are your thoughts there?
Amith (48:19):
Well, I mean, a lot of people come to me and say, hey, our strategic plan that we set in 2021, you know, says X, Y and Z. And I'm like, well, that's cool. I hope you like it; it's quality content for your museum, maybe, but it's no longer relevant at all. Because the reality of it is, the strategy is informed by the possibilities of the world that you live in. And when the world radically changes and the opportunities
(48:39):
are radically different, the competition's different, the risks are different, the strategic framework has to shift. I mean, for one thing, you have to be a lot more nimble than a five-year strategic plan. Like, I don't look further out than two years. I have ideas of what might happen in five or 10, but I don't know any better than anyone else, really. But the next two years: a continuous, rolling shift in
(49:01):
terms of what we're going to go after, based on the rapidly changing environment. You cannot inform strategy effectively if the environmental factors external to your organization have changed, even a small amount, but certainly radically like this. So strategy has to be informed by those opportunities and those risks. Otherwise your strategy is, like I said, basically a historical artifact, and irrelevant.
John (49:21):
And that's fair. I mean, I think I probably should preface that with: assuming you have an up-to-date and effective strategy. Yeah, for sure. My push is that a business outcome should be the driver. Agreed. Not a technology for the sake of it, and so...
Amith (49:35):
Yeah, I knew what you were saying with that. And I think a lot of times people think, okay, well, our strategy, our business outcomes circa 2021 were these three things, and we set that as our five-year goal set. Well, sometimes you have to throw that out and reset it. But yeah, no, it totally makes sense. Having clarity on outcomes, I think, is one of the most critical lessons. I think the association sector needs to zoom in on that and say,
(49:57):
well, what should those outcomes be, and why do they matter? And are they quantitative? Are they binary? Did we achieve them or not? Are they measurable to begin with? Are they time-bound? Are they things that actually will achieve the general business outcome if we hit the measurable outcome we think we're going to get? And a lot of times there's murkiness around that. That's true in corporations, too, but in associations, I find that for a lot of the so-called objectives, I don't have any way
(50:19):
to know whether or not I've achieved them after the fact. They just like to say, hey, we want to have improved member service. Okay, what does that mean? So you have to really refine it, the way I think you were alluding to. It's really critical.
John (50:30):
Yeah, absolutely. And it gets back to our point earlier about, hey, how do I see ROI? How do I see impact? You have to know where you're going beforehand because, to your point, otherwise, if there isn't a clear yes-or-no answer at the end, then it's very hard to know if you're successful.
Amith (50:45):
Well, Mallory, earlier you just asked John to kind of articulate if he was an AI, which is kind of a fun, you know, creative exercise for all of us. And for me, I've actually been accused of that more than once. I think that's kind of funny. My kids sometimes call me a bot, which I thought was a compliment at first.
Mallory (51:04):
I don't think so.
Amith (51:04):
I mean, I looked it up on, I think it was like Urban Dictionary. I'm like, oh wait, that's not positive. But I thought it was like high praise, you know.
Mallory (51:13):
Amit wants to be a bot. It could be high praise. I think you could take it that way. Oh, I think that's how it was meant. Well, I know we're almost out of time. At the top of the pod, John, you were talking about the idea of imagining what's possible, which I think is a really creative way to think about Gen AI in particular: kind of, if you had this PhD-
(51:34):
level assistant at all times, what would you do, and what would you do with your time as a human? So I'm curious, if you were doing that same exercise right now, is there anything near-term in terms of use cases that's not quite possible yet, but that you're really excited to work on with your clients in the next year or so?
John (51:53):
The big buzzword in general is agentic, or having these sorts of agents that you give goals to rather than tasks, and I think that's a pretty exciting concept. And so OpenAI demoed a tool called Operator recently, which essentially just uses your keyboard and mouse to go
(52:14):
through things. And so you say, hey, go buy me tickets, go book me dinner, and it just Googles and goes through it like you would if you were a person. And so you can think: does that mean we're going to reduce our reliance on APIs? There are lots of implications there. But where I find the agents exciting, or the opportunity exciting, is how one can own sort of an end-to-end process. So I'm going to use a procurement example, because I
(52:37):
think that's most tangible, and maybe you can help me translate this specifically to associations. But imagine you're at a company, like, hey, I need a service. We don't do that internally. I need to go get a partner to help me with this, and so I need to write an RFP, and that's kind of a chore. So instead, let's say I had this sort of vendor management agent,
(53:00):
and I can say, hey, here's my situation, I need a vendor that can do X, Y, and Z, help me write the RFP. And it's going to write, it's going to ask you clarifying questions, you can answer it, and you'll sort of write it together. And you're like, okay, cool, now I have an RFP, maybe trained on a thousand RFP examples we have from the past, whatever the case is, and then I can send that out. Now I get responses for this RFP, and you're like, great, now
(53:22):
I have a bunch of 50-slide PowerPoint decks I have to review. Instead, I'm going to throw all those into the same agent which built the RFP with me and say, hey, help me grade these. And while you're grading them, also look to see if we've worked with any of these companies before, and whether we had a good experience or a bad experience, to help me understand how I should think about which of these vendors is the best bet. So then it helps you actually select the vendor, and it does a
(53:45):
better job than most humans would, or at least enhances what most humans would do, because it's actually going to get into the details. Where I think it really gets exciting is that you can then have it maintain things going forward. And so every time I get an invoice: hey, make sure that this invoice looks right based on our agreement. Am I getting all my right discounts? Am I getting the right rebates? Is it time to renegotiate, because things have changed? And you almost have this tool, this
(54:07):
agent, that exists indefinitely, and you can do this with one vendor, you can do this with 100 vendors, and it allows you to sort of own this process end to end, and then restart the process, renegotiate, whatever. That, I think, can be pretty exciting. That's a procurement example, but you can think of them in finance, you can think of them in terms of member retention, you can think in terms of employee onboarding.
(54:27):
There are lots of really cool examples of: if the tool could understand a goal rather than a task, it unlocks a lot more opportunity.
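To make that concrete, here is a minimal Python sketch of the goal-driven vendor-management agent John describes: one agent that drafts the RFP, grades the responses, and checks invoices against the agreement. It is an illustration only; the llm() helper is a placeholder (an assumption, not any specific product's API), and the method names are invented for this sketch.

```python
from dataclasses import dataclass, field


def llm(prompt: str) -> str:
    """Placeholder (assumption): wire this to whatever model API you use."""
    raise NotImplementedError


@dataclass
class VendorAgent:
    goal: str                                     # e.g. "select and manage a vendor for service X"
    history: list = field(default_factory=list)   # shared context across every step

    def ask(self, task: str) -> str:
        # Every step sees the goal plus everything done so far, which is
        # what lets one agent "own" the process end to end.
        answer = llm(f"Goal: {self.goal}\nHistory: {self.history}\nTask: {task}")
        self.history.append((task, answer))
        return answer

    def draft_rfp(self, situation: str) -> str:
        return self.ask(f"Draft an RFP for this situation, asking clarifying questions first: {situation}")

    def grade_response(self, proposal: str, past_experience: str) -> str:
        return self.ask(
            f"Grade this proposal against our RFP, weighing our past experience "
            f"with the vendor ({past_experience}):\n{proposal}"
        )

    def check_invoice(self, invoice: str, agreement: str) -> str:
        return self.ask(
            "Check this invoice against our agreement: discounts, rebates, and "
            f"whether it's time to renegotiate.\nAgreement: {agreement}\nInvoice: {invoice}"
        )
```

The detail the sketch tries to capture is the shared history: because the same agent carries context from drafting to grading to invoice checks, it owns the end-to-end process rather than performing isolated tasks.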
Amith (54:36):
John, that's an excellent example. And first of all, for all of our association friends, as well as the vendor community that are listening: the term RFP is definitely a well-known term. Associations do issue quite a few of them, not at the scale of a corporation that's running a procurement process where they're repetitively buying certain kinds of products or services, or at the scale where that level of automation may be
(54:57):
justifiable. They may be doing 10 RFPs a year, or 50 RFPs a year for a large organization, but it would still be valuable to do this, because you could still use some of the steps, maybe just not in a full-auto way where, as you described, the agent's actually emailing vendors and all that kind of stuff. But maybe the idea would be that it helps you write the RFP, helps you evaluate the responses, and helps you catch things that you may not otherwise have caught:
(55:18):
take in five or ten responses and normalize them into a spreadsheet to make a comparison easier, and all those steps and things.
I think what you said is, first of all, very relevant for this community, so that's appreciated. I would translate it also to another business process that's totally different but at the same time similar, which is the process associations go through to produce content. And for many
(55:41):
organizations that involves this idea that's RFP-like. It's called a call for speakers or a call for papers, where an association will say to their community: hey, we're going to have this event coming up in June, and we're opening up a call for speakers, and we'd like people to submit proposals. And there's typically some kind of rubric that says: hey, to submit, you have to provide this amount of content.
(56:03):
It has to be on one of these subjects. You have to have this kind of experience. Maybe you need a co-presenter, or not. Maybe you have to be a published author if it's an academic institution; that oftentimes is the case, on and on and on, right. And there's this whole process where they're intaking all these proposals that are coming in to speak, and they have to grade them, and they have to provide feedback, and they have to kind of filter it down to a narrower set.
(56:23):
Very, very similar in a lot of respects. That process is a perfect candidate for an agent that would do a great job both streamlining the efficiency, the way you described, and ultimately producing better content, because one of the problems that associations have in the workflow I'm describing is they have a lot of biases, just like we all do. Like, the people who are on the committee
(56:44):
that's typically making these selection decisions, they've looked at a lot of these same names in the past, and they're like: oh, I know this person, they're pretty good. I know John; maybe I don't want to include him. So you have those kinds of scenarios going on, and the AI, I think, is going to be a little bit more thorough, perhaps a little bit more objective than a lot of us would be, and then ultimately perhaps produce more novel content and more
(57:08):
interesting content in areas the association sometimes has a hard time covering.
And also, the other thing is, one of the main ways that you drive away newer presenters or speakers who are submitting to you is you ghost them. And what a lot of associations do is they'll take months to respond to people. It's kind of like you're getting a ruling from the king, like saying: oh, we hereby deem you worthy to come and speak at our event. It's like you get a scroll in the mail.
(57:30):
That's how it feels. It's that slow. Whereas it would be great if, first of all, if I screwed up, if I submitted something incomplete, I got an immediate response saying, hey, you know, you're not too bright, are you? You should really probably include your resume, or whatever. That's not the best email template, but you know, something like that, where I'm like: oh damn, I forgot to include that; let me upload it and have a shot at being considered.
(57:51):
People would like that, as well as faster feedback at all the stages, including maybe some qualitative feedback saying: hey, your proposal was too similar to many others we got, and in the future maybe here are some other topics you might consider. That would improve everyone's experience, at all angles, like the customer service example. So it's kind of translating a similar workflow to what you're
(58:14):
describing into something I think a lot of our folks would really find a lot of value in, to both improve their products and decrease their pain.
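A minimal sketch of the instant-feedback piece of that intake workflow: validate each proposal against the rubric the moment it arrives and send actionable feedback back, so an incomplete submitter can fix things and resubmit instead of being ghosted for months. The field names and rubric values below are hypothetical, chosen just for illustration.

```python
from dataclasses import dataclass

ACCEPTED_TOPICS = {"ai", "membership", "events"}  # hypothetical rubric values


@dataclass
class Proposal:
    speaker: str
    topic: str
    abstract: str
    resume_attached: bool


def intake_feedback(p: Proposal) -> list[str]:
    """Return immediate, specific feedback instead of months of silence."""
    issues = []
    if p.topic.lower() not in ACCEPTED_TOPICS:
        issues.append(f"Topic '{p.topic}' is outside this year's themes: {sorted(ACCEPTED_TOPICS)}.")
    if len(p.abstract.split()) < 100:
        issues.append("Abstract is under the 100-word minimum; please expand it.")
    if not p.resume_attached:
        issues.append("Please attach your resume so the proposal can be considered.")
    return issues


# Usage: respond the moment the submission arrives, not months later.
submission = Proposal("Jane Doe", "AI", "A short abstract.", resume_attached=False)
for issue in intake_feedback(submission):
    print(issue)  # in practice, emailed back immediately with an invitation to resubmit
```

The grading rubric and the qualitative "too similar to other proposals" feedback would sit on top of this, but even the deterministic checks alone remove the worst of the ghosting problem.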
John (58:21):
Yeah, and even feedback in real time. So not just "you weren't selected, and here's why," but "this isn't there yet, but we really like this idea. Could you blow that up and do more there?"
Amith (58:31):
And resubmit it. Totally. And a lot of speakers are like: yeah, totally, I'd love to present on that. Sounds super interesting.
John (58:37):
Exactly. So it doesn't just have to be after the fact; it can be real-time improvement as well.
Mallory (58:44):
John, thank you so much for joining us today on the Sidecar Sync podcast. I think you've shared tons of insights and stories that will be incredibly beneficial for our association listeners, so thank you for joining us.
John (58:55):
Thank you both.
It was a fun conversation.
Amith (59:08):
Thanks for tuning into Sidecar Sync this week. Looking to dive deeper? ...journey with AI. And remember, Sidecar is here with more resources, from webinars to boot camps, to help you stay ahead in the association world. We'll catch you in the next episode. Until then, keep learning, keep growing, and keep disrupting.