Episode Transcript
Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Tom Mueller (00:06):
Hi everyone.
Welcome back to the Leading in a Crisis podcast. On this podcast, we talk with crisis leaders around the world and we share lessons learned and stories from the front lines of crisis management. I'm Tom Mueller. With me today is a special guest, Philippe Borremans. Philippe is joining us from Portugal today and he is, well...
(00:31):
Philippe, tell us a little bit about yourself. You have such a wide and varied history. How do you introduce yourself to people these days?
Philippe Borremans (00:42):
Hi Tom, thanks for having me on the show. So, yeah, my background is in public relations. I studied that at school a very long time ago and then started my PR career in an agency, Porter Novelli in Brussels, in the Brussels bubble. Originally I'm from Belgium, and I spent 10 years with IBM
(01:02):
as a public relations manager and then moved out as a consultant. So pretty early on I loved a good crisis from time to time, and that's my specialty now: crisis, emergency and risk communication. I'm lucky enough to have clients on both the private sector side and
(01:22):
the public and UN agency side, so I sometimes work on product recalls and then sometimes I work on epidemics and nasty things like that.
Tom Mueller (01:34):
Well, you bring a really wide variety of experience into this, and lately I know you've had a focus on artificial intelligence and trying to make sense of that for those of us who are just sort of awakening to the fact that there's this AI tool out there.
(01:54):
I've been very interested to read a lot of your writings around AI. And, for those of you who may not know, Philippe does a regular newsletter called Wag the Dog where he dials in and talks about issues with AI and crisis communications and all that
(02:15):
in the newsletter, and it's usually an amalgam of a variety of different sources of information with his own unique spin on it. So they're very interesting reads, I'm finding, Philippe. Thank you. Hey, let's just talk AI. I'd like to dive into this, since it's such a hot and emerging issue for everybody, really.
(02:37):
But when you think about crisis management incidents, right, for most of our listeners, they may be in a corporate communications job, such as you were with IBM, or an agency job like Porter Novelli. I'm figuring, you know, what are the applications for AI in dealing with a major incident, and I know you've got thoughts on that.
(03:01):
So if you were going to counsel a client today on how to use AI in an event that happened this afternoon, what would you tell them?
Philippe Borremans (03:11):
Well, there are many different ways of applying AI, and I think first we need to understand what we mean by AI, because there are different levels and formats of AI. When I speak to communicators, crisis communicators, emergency communicators, we usually think about generative AI. That's the ChatGPT that you have, the Claude, and what have
(03:34):
you, and those are, of course, the things that we have, let's say, at least played around with, because it's online, it's free or paid but not expensive, or it gets integrated into our working environment. If you work in a Microsoft environment, you have their own AI integrated in all your tools as well. So it's becoming, let's say, a day-to-day thing which is very
(03:58):
much accessible. But then you have other AI systems which have been there for a long time. All our media monitoring and social media monitoring tools have been using one or the other form of AI in the backend for a long time. Crisis management systems, all the way down to operations and logistics, have one or the other form of AI, or at least
(04:20):
automation in there.
So it's a big field.
But if we look at generative AI, the things that we can put our hands on at very low scale and very quickly bring into our workflows, then what I see is that many people immediately think about content creation. And why not? It's a good thing. These things write pretty well.
(04:41):
You can train them to write very well, as long as you know how to talk to them; that is the prompting technique that you need to know. So that's fine, but we can go higher up. I'm, for instance, using my own little AI system to create first drafts of rather complicated crisis communication plans.
(05:02):
Or when a client is asking me, well, Philippe, could you create a...? So, I do a lot of crisis simulation and scenario design and then rolling them out. Well, there again, I've got another AI helper, a sidekick as I call them, to help me, again, with a first draft.
And so I think, for crisis communicators, emergency
(05:25):
communicators, risk communicators in the context of emergencies and crises, the biggest added value of these systems, of course when you know how to use them, is the gain of time, the single resource that we never have
(05:46):
enough of. And that's, of course, using them ethically and with transparency and, of course, based on good information that you're working with, in a secure environment.
Tom Mueller (05:51):
But the time-saving aspect is incredibly powerful in the context of a crisis. So let's talk about the plan drafting for a moment. We had a guest on a few episodes back, Dan Smiley, who does the Tactics Meeting podcast, based out on the west coast of the US, and he was talking
(06:20):
about how he needed to develop an AI agent and have it spit out a rough initial response plan to deal with this. And he said, you know, it wasn't a final plan, but it gave me a great head start to get this done. For companies or municipalities that might want to tap into that,
(06:45):
is that as simple as going to a ChatGPT and asking for help? How does somebody get started with that?
Philippe Borremans (06:54):
Well, if you've never had this interaction with generative AI, I think that would be a good idea, taking into account, of course, a couple of rules. You will never feed these systems confidential information and, of course, you'll be very transparent in the use of those
(07:14):
tools. In the EU it's the law, thanks to our EU AI Act. But to test the waters, definitely go in there, and then maybe take a couple of, well, not courses, but online you have very smart people explaining how to prompt, how to create really good instructions as well, and
(07:38):
then you can play around. So you can see a first feedback, a first output of these systems, and I think most people who have never done this in a structured, good way will be very much impressed. Now, that's good to get a feel for it and to start to get an idea of what the power of AI could be. Once you get serious about it, then, of course, I advise all my
(08:01):
clients that they need to look for a large language model, the AI system backend on which generative AI systems or chat systems are set up, find the one that is suitable for them and then, of course, implement that on their own servers, behind their own firewall, with their own IT
(08:25):
specialists. You want your own system. It is not, let's say, an incredibly huge project. There are very good open-source LLM systems that can be instructed and made purpose-built. You can then train them on your own information, which I assume
(08:46):
you will think is the correct information that you want to work with. So the hallucination part gets limited. You can instruct the system on what to do exactly, and then, I think, and I know a couple of organizations which have done that already, they see a huge increase in time savings in getting these first drafts out, or even going as far
(09:08):
as testing emergency messages. So there are a lot of applications that one could think of, but it's about starting small, taking one thing where you think, well, maybe there it can make a difference, and then testing it.
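As a rough illustration of the behind-the-firewall setup Philippe describes, here is a minimal sketch in Python of asking a locally hosted open-source model for a first draft. It assumes an Ollama server running on your own machine and uses Mistral (a model Philippe names later in the conversation) as a placeholder; the plan excerpt and prompt wording are purely illustrative.

```python
# Minimal sketch: first-draft generation against a locally hosted open-source
# model (here Mistral served by Ollama on localhost -- an assumption, not
# Philippe's actual setup). The plan excerpt is a non-confidential placeholder.
import requests

PLAN_EXCERPT = """
Severity levels, notification chain and approved holding-statement structure
from our (already scrubbed) crisis communication plan...
"""

prompt = (
    "You are a crisis communications assistant. Using only the plan excerpt "
    "below, draft a first-draft holding statement for a product recall. "
    "Flag any gaps instead of inventing facts.\n\n"
    f"PLAN EXCERPT:\n{PLAN_EXCERPT}"
)

response = requests.post(
    "http://localhost:11434/api/generate",   # default Ollama endpoint
    json={"model": "mistral", "prompt": prompt, "stream": False},
    timeout=120,
)
print(response.json()["response"])            # the model's draft, for human review
```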
Tom Mueller (09:22):
So one of the challenges then, early on, is just educating the AI agent about your way of doing things. And you generally do that by just feeding it copies of your existing plans or past plans, maybe copies of your press releases, and then it will sort of interpolate all that data and,
(09:46):
once you ask it to output something, it's going to rely on that data to do that.
Philippe Borremans (09:52):
Exactly.
But that, tom, that means andthat's another, how would I say
aspect that I see from time totime that means that your plans
need to be in order, that yourSOPs need to be in order, that
your system should run withoutAI.
And so I think it's a niceaspect of trying to work with AI
(10:14):
, because it forces you to look on the inside: oh yeah, that crisis plan from six months ago is not really up to date anymore; oh, that SOP, well, you know, 10% of those people have left the company. So I think it's good, because it forces at least the people who want to go in that direction to turn inward and look at, okay,
(10:37):
what we have now, is that good enough to feed the system? Because, of course, if you train it on outdated information or not-optimal SOPs or whatever, then the output will be based on that old information.
Tom Mueller (10:55):
So you will not get the value. That takes us to a very old adage: garbage in, garbage out, right? Yes, yeah, interesting. It's a really good point you make about sort of spring cleaning your crisis plans and making sure things are up to date. But you could make a case, too, for dumping it all in there, spitting
(11:17):
out a new plan, and, you know, now you can see, here are some new things.
Philippe Borremans (11:22):
Here's where your gaps are. So you could use an AI which is then connected to the web, maybe to, you know, public information, research databases, and say, look, this is my plan in my context. What are the gaps? How can it be improved? So I do this often to double-check.
(11:43):
So I write plans or do stuff, also by using AI, but then I ask the AI again: now check the work, be very critical. Have we missed something? Have you overlooked something? What-if scenarios? So it's very good at that, but that means that it, of course, can check other resources outside of the
(12:03):
organization.
Otherwise, you stay in your bubble.
Tom Mueller (12:06):
So when you're doing that, when you're checking it against a broader database, is that on something like a ChatGPT or another large language model?
Philippe Borremans (12:18):
Yeah, it depends. Different LLMs, large language models, and chat interfaces have different, I wouldn't call them specialties, but they're better at certain things. For instance, Claude from Anthropic is the best writer; we know that according to benchmarks. So if you want, you know, good written stuff, then Claude
(12:41):
is maybe the go-to place. For a generalist approach, with deep research now as well, you have ChatGPT, a very powerful model; it's still one of the most powerful models out there. Perplexity, for instance, is an interesting one, because it was the first one which was also trained on scientific databases,
(13:02):
so it was the first one which gave you references that you could double-check, which is important. So it depends on what you want to do. Generally, if I'm using the publicly available ones, I would start off with Perplexity to do the research, so that I can double-check the links and where the information comes from, and I know that I
(13:23):
can tell Perplexity to only look for scientific papers and things like that. Then I would go over to ChatGPT to maybe brainstorm things, question stuff and have it really think things through. And if the output then needs to be, I don't know, an article or a paper or an e-book or whatever,
(13:45):
I would use Claude as well, because it's a good writer for a first draft. And I always end with another small tool, which is InstaText, and InstaText is very good at correcting my very badly written English. My mother tongue is French and Flemish, and
(14:08):
although I've always worked in English, I still want to be sure that I'm not making stupid mistakes in the language, and then I have to pick British English or American English, of course.
Tom Mueller (14:19):
But that's quite the series of different tools that you're using, each of them AI-based, each of which brings a slightly different area of focus or expertise. Then, for the average person who wants to jump in and sort of practice or try these out, do you have to subscribe to each one of those tools to
(14:41):
have access?
Philippe Borremans (14:43):
No, no, all of them have a free version. And if it's your first step, definitely start with the free versions; all of them are accessible in a free version. Once you see a bit of the power or the potential application, you could go for a subscription. Most of them are like $20 a month. It's not the end of the world, but they do give you, let's
(15:05):
say, more power, more computing power, more thinking power, reasoning power. So you do see a difference. But for the first steps, the free versions are definitely worth checking out.
Tom Mueller (15:20):
So one of the things that concerns me as I think about that, if I wanted to drop in some crisis plans to use as a basis for developing future plans, is the confidentiality issue, right? Do I need to scrub those plans of, you know, sort of company-specific references or phone numbers before dropping them
(15:42):
into a large language model?
Philippe Borremans (15:44):
Well, the reason you would have to do that is because your company has that rule in place. Your company has that rule in place because there's a lot of hype around AI and people think, oh my God, you know, I put the plan in and it shows up somewhere else. That is not the case; it gets split up into a zillion pieces and everything. But of course, you have your company rules, and it's
(16:05):
very easy to take a plan and take out all the references: company name, telephone numbers, of course, names of people in the crisis team. Just keep your plan as such, edit it to take out all the confidential things, and if your company then allows you to use that, because it's a simple framework of a plan, then you can work with that.
(16:28):
The other approach is simply, when you chat with the system, to describe your organization. You don't have to use the name. You could say, I'm working for a major technology company, the geographic scope would be the US
(16:49):
, we're looking at certain types of potential crises, let's say a cyber attack. So you could make it like that as well in your prompt, and then it will work with that information, and so you can fine-tune it so that it's very relevant to your own environment.
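To make that concrete, here is a minimal sketch of what such an anonymised prompt could look like. The organisation description follows Philippe's example (a major technology company, US scope, a cyber attack); everything else is illustrative placeholder text, not from a real plan.

```python
# Sketch of an anonymised prompt: describe the organisation and the scenario
# without naming the company, people or phone numbers.
org_context = (
    "I work for a major technology company. "
    "Geographic scope: the US. "
    "Potential crisis type we are planning for: a cyber attack."
)

task = (
    "Acting as a crisis communications advisor, draft the outline of a "
    "crisis communication plan for this organisation. Use placeholders such "
    "as [SPOKESPERSON] and [HOTLINE NUMBER] instead of real details."
)

prompt = f"{org_context}\n\n{task}"
print(prompt)  # paste into ChatGPT, Claude, etc., or send through an API
```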
Tom Mueller (17:10):
And then it will search its own existing database of other plans, other data that it's learned from already, and come back to you with something that, from what I hear generally, is a pretty good start. I have a very small experience with that, Philippe: the graphic
(17:32):
that I have as my backdrop here, I generated using AI. It was like 18 months ago, and I think it took me more than 50 different prompts to try and get this image, and it was such a frustrating process, right? So there's something you mentioned earlier about, you know,
(17:55):
learning how to talk to them and how to write good prompts, and that's something that I think is going to become a real skill set for practitioners and people working in the crisis space, right? The better you can do that, the more quickly you're going to engage and get meaningful results.
Philippe Borremans (18:16):
Well, there are two schools on that, because it's a topic that comes back every couple of months. You've got a school which says, in fact, your prompting skills are, or will soon be, not worth anything, because the systems will get so smart that you can just type in what you need and it pops out.
(18:36):
And then you've got a school which says, no, actually the structure of your prompt, there is a system behind that, because you're getting closer to the thinking language, so to say, of how these systems have been built. And in the latest, let's
(18:57):
say, debates I've seen now, with the new version of ChatGPT coming out, it's the organizations themselves, like OpenAI in the context of ChatGPT, who said no, prompting is still important. It's getting easier, but the structure is still important, because you'll get better answers.
You will always get an answer.
(19:18):
That is not the issue; the thing will answer. But if you want a good, structured answer that you can work with and that takes into account what you really want, then yes, you need to have those prompting skills, and they're still relevant today. Maybe in six months not anymore, because we know how fast these things evolve. But still, I think, apart from that, it's also an interesting exercise for
(19:40):
yourself: knowing actually, what do I want as output? How does it need to be structured? What is the context? What is the audience? Because these are all the questions that you need to ask yourself if you want to write a good prompt, and I think for communicators, those should always be your basic questions: who is the audience, what is the format, what is the context, what are the things moving around it that
(20:04):
can be important? So, apart from just the technical prompting aspect, it's a good way of organizing your thoughts as well.
Tom Mueller (20:13):
Yeah.
So those good basic practitioner skills come into play here, but applied to a new technology, trying to make sense of it. I'd like to throw a couple of ideas at you and get your thoughts on how quickly a response team today could leverage AI to do some of these things. For example, translations. Press releases are going out and you need to
(20:38):
translate them into two or three different languages. That's something AI could do very quickly today, right?
Philippe Borremans (20:45):
Yeah, with a human in the loop for a final check, definitely. But we know all these cases where federal agencies sent out risk communication messages but forgot that 30% of the population was Spanish-speaking. We know these cases, we know that it goes wrong, so now there is no excuse anymore. Of course, we're always the human in the loop.
(21:06):
I'm very much of that idea. Yes, it helps me enormously, but I'm always the last one to check, and I think that's a good approach.
Tom Mueller (21:15):
Yeah. Is there a difference between Google Translate, dropping a document in there, versus dropping it into an AI agent, or is it really the same technology?
Philippe Borremans (21:26):
Well, to be honest, I don't know, because I haven't used Google Translate for years. Now I use DeepL, which is a German translation model that uses AI, and to me that is at the level of what I need. I can actually translate full documents, PowerPoint slides, what have you,
(21:46):
with minimal correction, and it's very impressive.
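Philippe doesn't go into the mechanics here, but for anyone who wants to script translation into a response workflow, DeepL also exposes an API with an official Python client. A minimal sketch follows; the auth key, text and target language are placeholders.

```python
# Minimal sketch using DeepL's official Python client (pip install deepl).
# The auth key, source text and target language are placeholders.
import deepl

translator = deepl.Translator("YOUR_DEEPL_AUTH_KEY")

press_release = "We are aware of the incident and are investigating."
result = translator.translate_text(press_release, target_lang="ES")

print(result.text)  # Spanish draft -- still needs a human check before release
```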
Tom Mueller (21:51):
And what's the name of that tool again? D-E-E-P-L, yeah. How about another task: social media monitoring. Is that something that AI is set up to help us with?
Philippe Borremans (22:06):
Yeah, so I mean, the usual suspects among the social media and media monitoring platforms have all incorporated AI for a couple of years now, and of course it's getting better and better, or more powerful, but they were already looking at this technology before it became a big thing two years ago. So it's already ingrained in the back end of those systems, so
(22:29):
that will evolve and get better. Another thing, and I think that's the beauty of AI as well, is the, let's say, low-cost approach. Not everybody has a couple of thousand a month for a social media monitoring platform or even a simple media monitoring platform. Well, actually, the $20-a-month systems out there can
(22:53):
actually do a very good job if you know how to prompt them and how to instruct them. You could actually upload the URLs of your online coverage, for instance, and it will give you a sentiment analysis, it will give you these things. So if you know how to tinker with that, yes, it is also a potential approach to do it yourself instead of
(23:15):
relying on the big platforms.
But of course, many companies already have a contract in place, and there the AI is just in the back end.
Tom Mueller (23:22):
You don't see it, but you know it's there and it does its work, yeah. So if you're a sole practitioner out there working with small clients and you want to sort of be able to monitor social media efficiently, some of us use different tools to monitor X, for example, and TweetDeck was one of those very
(23:42):
popular ones. You're saying you could just go into, you know, X's version, which is Grok now, I think, and give it some prompts and have it running a search for you over time?
Philippe Borremans (23:59):
Yeah, so many of the generative AI platforms now have something called, or close to, tasks, or repetitive actions. ChatGPT has that now, so you could actually create a task and say, look, every morning at 8 I want to get an overview based on these keywords. Now there's a big caveat: it's still very much touch and go,
(24:24):
not always the best results. So my approach would rather be to go through RSS feeds, which is a very old technology but still one of the best out there. You know, let's face it, old stuff really works. I would pull in coverage based on keywords and then I would
(24:46):
feed that into a gen AI system and say, look, this is my input, now you go and analyze it. The finding part is still a bit difficult for these things. Perplexity has now become known for finding good and recent coverage,
(25:06):
but still, I would either rely on the professional platforms or tinker a bit more with RSS-based search and then feed that in, really prepare it for the AI and say, look, this is my input today. Do the analysis, give me the results.
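As a rough sketch of that RSS-plus-gen-AI workflow: pull items from a feed, filter by keyword, and hand the prepared input to a chat model for analysis. The feed URL, keywords and model name below are placeholders, and the OpenAI client is just one possible backend, not a specific recommendation from the conversation.

```python
# Sketch: pull coverage via RSS, filter by keyword, and ask a gen AI system
# for a coverage/sentiment analysis. Feed URL, keywords and model name are
# placeholders; any chat-completion backend could be swapped in.
import feedparser                # pip install feedparser
from openai import OpenAI       # pip install openai

FEED_URL = "https://example.com/news/rss"        # placeholder feed
KEYWORDS = ("recall", "outage", "acme")          # placeholder keywords

feed = feedparser.parse(FEED_URL)
items = [
    f"- {entry.title} ({entry.link})"
    for entry in feed.entries
    if any(k in entry.title.lower() for k in KEYWORDS)
]

client = OpenAI()  # reads OPENAI_API_KEY from the environment
analysis = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name
    messages=[
        {"role": "system",
         "content": "You analyse media coverage for a crisis comms team. "
                    "Summarise the main themes and give an overall sentiment."},
        {"role": "user",
         "content": "This is my input today:\n" + "\n".join(items)},
    ],
)
print(analysis.choices[0].message.content)
```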
Tom Mueller (25:23):
And then it will provide you some kind of a heat analysis or sentiment analysis, all these things that are possible with other platforms. Yeah, interesting. It's good to know some of those old tools are still relevant today. We've already talked about developing plans. How about the training and scenario development areas?
(25:48):
Do you find applications today for helping to develop scenarios
or training plans?
What's your vision on that?
Philippe Borremans (25:57):
So that's a big part of my work, what is called capacity building. I'm running simulations, and I've trained my own virtual sidekick, so that is one that I use to get out my first drafts of simulation scenarios, or training courses and
(26:21):
things like that. Another one, and I'll put the link in the chat if you want: I created a small custom GPT, and this one is open to anyone who has a ChatGPT login. In fact, I call it the crisis role player. You open it up and it will kick off with a couple
(26:45):
of questions and, based on your responses to those questions, it will run a scenario for you and it will ask you: how do you respond, how will you react? And at the end, when you've finished playing the game, so to say, you'll get some feedback. So it might seem a bit crazy, but it really helps.
(27:06):
I use it in face-to-face workshops to show, for one, the potential application of AI, but also, by playing that role-playing game through an AI system, people both understand the AI and at the same time test their knowledge or their reactivity against some sometimes very difficult questions that the AI system is asking.
(27:28):
And again, this is, of course, pre-trained, based on a knowledge base et cetera. So there are different applications in that area.
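Philippe's crisis role player is built as a custom GPT inside ChatGPT, so there is no code behind it to show, but the same idea can be sketched as a small loop around any chat model: a system prompt sets up the facilitator, the trainee answers each injection, and the model closes with feedback. Everything below is a simplified stand-in with placeholder model and prompt wording, not his actual tool.

```python
# Simplified stand-in for a "crisis role player": the model runs a scenario,
# challenges the trainee turn by turn, then gives feedback when the trainee
# types "end". Not Philippe's custom GPT -- just the same idea in miniature.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment
MODEL = "gpt-4o-mini"  # placeholder model name

messages = [{
    "role": "system",
    "content": ("You are a crisis simulation facilitator. First ask the user "
                "two short questions about their organisation, then run a "
                "crisis scenario one injection at a time, challenging their "
                "responses. When the user types 'end', give honest feedback "
                "on the timing, tone and completeness of their answers."),
}]

while True:
    reply = client.chat.completions.create(model=MODEL, messages=messages)
    facilitator_turn = reply.choices[0].message.content
    print("\nFACILITATOR:", facilitator_turn)
    messages.append({"role": "assistant", "content": facilitator_turn})

    user_turn = input("\nYOU: ")
    messages.append({"role": "user", "content": user_turn})
    if user_turn.strip().lower() == "end":
        # One final call so the facilitator delivers its closing feedback.
        feedback = client.chat.completions.create(model=MODEL, messages=messages)
        print("\nFEEDBACK:", feedback.choices[0].message.content)
        break
```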
Tom Mueller (27:34):
Yeah, yeah, so it's an online crisis gaming tool, essentially. Yeah, it's a role player, yeah.
Philippe Borremans (27:41):
It's fun to do and people like that. But it is a serious game, so to say, yeah.
Tom Mueller (27:47):
That's really fascinating for me, Philippe, because when I worked in crisis training for BP for many years, I was always trying to figure out how I could test individuals. In the exercises you do, it's big groups of
(28:13):
people; individual roles are important, but it's hard to really focus on individual people. And a tool like that actually might give you an opportunity to individually test and prompt those folks, you know, in a variety of programmable-type scenarios, right?
Philippe Borremans (28:25):
Yeah, yeah. First of all, it's a safe zone, right? I mean, people know they're interacting with an AI, so it's fine, but still, it gives you feedback. It tells you, well, you know, that first reaction you gave there was maybe a bit too early. So it really gives you feedback and it's very individual; you can play it in a group, but it's made for one person playing
(28:47):
through the screen, and so, yeah, it's fun. Again, that's also the power of AI. I'm not a programmer; although I spent 10 years at IBM, it was as a public relations manager, so sure, I've got a bit of a technical feel, or let's say an IT feel, but I'm not a programmer. I never learned to code or whatever, and I can make these
(29:08):
things. I mean, it took me maybe four hours, just preparing the knowledge base and then the instructions and testing it and trialing it. So it's very low-key, low-cost and quickly deployed, and now it's available to all. So I gave you the link, so if you want to...
Tom Mueller (29:26):
Yeah, I got it, thank you. So, Philippe, to summarize, in terms of AI, your best advice for companies, or even municipalities, government entities, who are trying to think about how they engage with AI and get on the front foot: what's your sort of
(29:50):
best recommendation for getting started?
Philippe Borremans (29:53):
Well, I think, start very small. Invite a couple of people, and we're talking maybe about the crisis management team or the communicators; I'm more a crisis communications person. Invite a couple of people into a safe zone, try out the free versions with no confidential information. Just play around with it, see what it gives. Maybe follow a couple of prompting-technique YouTube videos or
(30:16):
webinars, and then start to play with it, because keeping it at a distance is pretty dangerous, because it will impact our work. Anyway, it's already impacting our work, and there's no way that, you know, we're going back to a world without AI, probably. So start with very basic things, playing around with it, seeing what
(30:37):
it can do, both on the content creation side, but also use it as a brainstorm partner, as a critic, as someone who pokes holes in your plan. But do it all in a safe environment, no confidential information. Once you have an idea of where it could make the biggest difference, from a productivity point of view, from
(31:01):
a speed point of view, then I would say organize what we call a sandbox environment, a little piece of the server, of course in full collaboration with the IT people that you have. Create a sandbox, meaning it's all independent, it sits there, you've got 100% control, and download an open-source LLM.
(31:21):
Again, there are a couple of them which are really interesting. There is one coming out of France, Mistral, which is very powerful. It's open source, you can download it. And then just begin with a very small, very specific use case, not the big 'oh, we're going to create a crisis management AI system'. No, no, no.
(31:41):
A very specific, small use case. Test it out, try to run it, see how it works, correct it, and then build on top of that. I think that's the best way forward. And I do believe it's also an important way forward, because I do think, once we go into private companies, governments, local, national, what have you,
(32:03):
you need to have that control. And let's not forget something else: in Europe, we have the EU AI Act. It's the law. You cannot just play around and do whatever you want, and we see the first signs now in the US as well, where states are implementing at least guardrails, right?
(32:26):
So you'll have to be transparent in what you're doing, and having that 100% control is, of course, the best approach, but it also allows you to work in a safe environment, in a sandbox environment.
Tom Mueller (32:46):
Thanks for being very specific and keeping it simple for those of us who are really trying to figure it out. Hey, if our listeners want to get a hold of you, to tap into your deep well of advice, what's the best way for them to do that?
Philippe Borremans (32:59):
Well, I'm very open to connections on LinkedIn. You can find me, Philippe Borremans, on LinkedIn. And then I think, yeah, the most output I do for the moment is through my weekly newsletter, which is Wag the Dog, and that's wagthedog.io. The .com was taken. But, yeah, that's where I publish
(33:22):
every week an article on risk, emergency and crisis comms at the intersection of technology and AI, and then there is a podcast version, but that's my two AI avatars, male and female, who in fact narrate my article automatically. So that's the best place to find me.
Tom Mueller (33:39):
Yeah, all right, and we will include some of those details in the podcast notes here. Terrific, Philippe. Thanks so much for joining us. It's been a really fun conversation.
We look forward to having you back again soon. Thank you, Tom. Thanks for the invitation, bye-bye.
A quick publicity note here for the Leading in a Crisis podcast. We've gotten some recognition recently by the folks over at
(34:03):
Feedspot, who have ranked this podcast as number 13 in the state of Texas in their listing of management podcasts. So congratulations to the team here at the Leading in a Crisis podcast for that ranking and that recognition by the good folks over at Feedspot. Feedspot, if you're not familiar, is just a company that
(34:25):
aggregates global media outlets and content creators and makes those lists available for others who may be looking for content or to engage with creators. So thanks to Feedspot for that recognition. And that's going to do it for this episode of the Leading in a Crisis podcast. Thanks for joining us. We'll see you again soon.