
November 20, 2025 59 mins


Artificial intelligence is reshaping how we train for emergencies — from the way we design crisis exercises to how we think about communication, collaboration, and creativity under pressure.

In this episode, OSRL’s Ines Costa joins host Emma Smillie to explore how AI is helping transform preparedness. Ines shares how her background in marketing and digital media led her to experiment with AI tools in training and crisis management — and how those early tests evolved into fully immersive exercises now used across OSRL.

Together, they discuss what AI can (and can’t) do, the ethics of using deepfakes in drills, the power of storytelling in technical training, and why experimentation is key to innovation.

Whether you’re in emergency response, communications, or tech, this conversation offers a grounded look at what happens when human creativity meets machine intelligence.

If you enjoy the show, please follow, rate, and share The Response Force Multiplier — your support helps us keep exploring the frontiers of preparedness and response.

Please give us ★★★★★, leave a review, and tell your friends about us as each share and like makes a difference.


Episode Transcript

Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Speaker 5 (00:01):
The fire's spreading and the leak's getting worse.
What do we do?

Speaker 2 (00:06):
The Coast Guard has launched a level two response, deploying firefighting vessels and oil boom barriers around the site. Investigators are now examining the cause of the collision.

Speaker 3 (00:18):
Boss, we're trying to contain the leak.

Speaker 4 (00:20):
There's backup equipment on the port side.
Try that one.

Speaker 3 (00:25):
What you just heard was an AI-generated simulation. There was no fire, no spill, just a digital scenario helping OSRL teams prepare for the real thing. This episode, we look at how AI is transforming emergency training and what happens when innovation meets readiness.

Emma Smillie (00:56):
On the Response Force Multiplier, we bring together compelling experts and thought leaders to provide a fresh take on key issues and cutting-edge techniques in this field. In each episode, we'll dive into one aspect, and we'll use OSRL's unique pool of experts and collaborators to distill that down into actual tools and techniques for better preparedness and response to incidents and emergencies.

(01:18):
My name is Emma Smillie. We are OSRL. And this is the Response Force Multiplier.

Speaker 3 (01:29):
Today on the Response Force Multiplier, we're exploring how artificial intelligence is reshaping the way we prepare for and respond to crises. My guest is Ines Costa, an executive here at OSRL. She brings a fascinating perspective. She started in marketing and communications, but her curiosity about AI led her into the heart of crisis management.

(01:50):
Together we talk about how AI is helping teams design more realistic exercises, bridge skill gaps across departments, and make emergency training feel more like real life. We also dig into the challenges, the skepticism, the ethics, and the environmental cost, and what it means to experiment responsibly with new technology. This is a conversation about innovation, creativity, and what

(02:14):
happens when someone from outside the technical world brings fresh energy to the field of preparedness. Let's get to it.

Emma Smillie (02:21):
Hi Ines. Welcome to the Response Force Multiplier. Thanks for joining me today. Uh, let's start by just you telling me a little bit about yourself, who you are, and what you do at OSRL.

Ines Costa (02:34):
Hello, thank you so much for having me. This is really exciting. So I'm Ines Costa. I work at OSRL as a marketing executive. So actually, I've helped promote some of the previous episodes of this podcast. So it feels like an honor to now be the guest on it. It's so cool. But yeah, so I help with OSRL's like social media and email

(02:57):
marketing and just promoting who we are like as a brand and what we do. But in the last year and a half, I would say, I've also been involved with supporting our incident and crisis management exercises and kind of like bringing a little bit of digital transformation into that world.

Emma Smillie (03:16):
And that's what you're here today to talk about. So we're specifically gonna talk about the work you've been doing with AI, and AI in exercises, and also in marketing and comms. But let's take a step back first of all. So what sparked your interest in AI to begin with?

Ines Costa (03:32):
So I actually was using AI even before I started with OSRL, almost like two years ago. So when ChatGPT was first starting to become a bit more popular and people were starting to use it for all sorts of random things. So yeah, I mean, I guess like the cover letter that you read and that led to you giving me this job.

(03:53):
ChatGPT made it, sorry to tellyou.

Emma Smillie (03:56):
Oh no, I never twigged that, to be honest, probably.

Ines Costa (04:01):
But uh yeah, sorry about this revelation time. Yeah, no, I definitely think I started using it initially with stuff that I was struggling with. And in this case, I think sometimes translating, you know, those skills and all these things that you want to say, especially for someone who's not a native English speaker, many times like that extra support in bringing things

(04:23):
that I would easily be able to communicate when I'm speaking, but sometimes when I'm writing, it doesn't come across as fluid. So it kind of supported me initially with that sort of stuff. But then once I started working for OSRL, and even in some of my freelance work that I sometimes will do outside, many times you find yourself working in really small teams where you know you have to wear many hats.

(04:44):
Um, and you know this, like at OSRL, you're in our marketing team, you know, one day you're trying to make a strategy for social media, and then you're trying to improve our email marketing, and then you're looking at SEO. These are all like highly specialized things that I think AI was really like a good way to kind of adapt to that and

(05:05):
support with getting insights to be able to perform on all those like different specialities, really. I think coming from a background where I was working more in the creative industries, where the style and the language is very different. And then I came into, you know, a very serious kind of industry where the language was very different,

(05:26):
the way that I communicate, that I had to communicate now, especially in our marketing, you know, like is very client-facing. And, you know, it's not like I can turn around to anyone and just say something like, you know, Slay Queen or something like that. No, you actually have to like use a different type of language, have to communicate very differently. And again, AI was something that was there to support me in

(05:48):
bridging that gap a bit. I feel like I've used it in so many things, and being able to, through OSRL, actually get access to external training where I got to learn a lot about the different ways that you can use it has really opened up possibilities for how much I can use it for.

Emma Smillie (06:04):
So, what's your secret then? Because I would never have said your cover letter was AI from looking back at it. How do you approach using AI but then making sure that it's still, I guess, I don't know, your voice, or just not, say, like you read some things and you're like, oh, that's definitely AI.

Ines Costa (06:25):
Oh, 100%. I think my approach to it is like I literally speak to it like it's a human being. So if you see like the conversations that I have with the app, the ChatGPT on my phone, like I always use, for example, the um speaking button, like that option, that feature using your voice. And I'm just literally chatting to it like it's a person.

(06:45):
And if it's giving me like a response, for example, that feels like a classic ChatGPT sort of reply, then I'll actually turn around to it and say, This is trash. You're talking to me like you're a robot. It doesn't sound authentic at all. Like, do you even know me? We've been talking for so long. How do you not know my style?

(07:06):
So really just giving it continuous feedback to teach it what you like, what you don't like, what it should sound like, has been really helpful in building those results, that output that actually reflects what you want it to achieve. And it almost helped me build a bit of trust in the tool as well, that I don't think you have when you first start and

(07:27):
you get those kind of like sketchy outcomes, and then you're a bit like, this is not very good. What, uh, what is all the hype about AI? Like, I can do better than that. I think once you train it and you actually start building that weird relationship with a digital being, uh, suddenly it feels different. Yeah.

Emma Smillie (07:47):
Yeah, yeah. Okay, I haven't used the voice bit so much, but yeah, I do give my feedback. Yeah, and it learns and it remembers. I did actually change a setting today that told it to sound like a robot in its replies, like really straight and blunt. Turned that off straight away. I hated that. As a tone, it was not one I engaged with.

Ines Costa (08:07):
Um, but you have to, otherwise it's gonna start doing its own thing, and you're just like, this is not... well, you know, we already have to manage like people in our day-to-day life. I don't want to also have to manage like a robot personality. This is too much, too much managing for me. It's not good. Yeah.

Emma Smillie (08:26):
Yeah. Even though I know that they've coded out the pleases and thank yous in, um, ChatGPT, I still always say please and thank you, because you never know. Come the revolution of AI taking over the world.

Ines Costa (08:37):
You have to be prepared. Yeah. Because they actually remember, they will have like a count of all the times that you said please and thank you. So I'm like, yeah, you have to use that.

Emma Smillie (08:47):
So you came in, AI and marketing, you made a huge impact at that time. And then exercises. How did it then start translating into exercises?

Ines Costa (08:56):
Well, it felt very random, to be honest. I feel like I definitely was just dropped into it, because you know, I came into the marketing team and I was mostly using these capabilities for content creation, for video and image editing, for design, like all the sorts of bits that it's quite useful for. And, you know, at the time we also encountered a bit of a

(09:19):
skill gap in the preparedness team, more in the sense that, you know, you have these teams that have very technical people that are incredible at their job when it comes to the preparedness side of things, but they don't really have the skill set to introduce design elements, video, photos, all these other things that can add value to it. So that's kind of how I fell into it.

(09:41):
There was this feeling that we could make our exercises a little bit more engaging by adding that element of videos, of photos, simulating modern media.

Speaker 2 (09:52):
Good evening. We begin with breaking news off Taiwan's western coast. A service vessel has collided with an offshore substation at a wind farm, resulting in a fire and raising concerns of an oil spill.

Ines Costa (10:04):
We didn't really have the capabilities in-house to do it at the time. And that's when I thought, well, if I'm using AI for all this other stuff in marketing, surely we can do it for preparedness as well. You know, if I'm using it to bridge my own skill set gap in marketing, the people in preparedness can use it to now be able to edit a video themselves too, or create a cool

(10:26):
image that reflects the scenario they're playing out with the exercise participants. I think in the beginning it felt a little bit weird because, you know, not coming from that technical background, I'd never done like an exercise, an emergency response or anything before. I wasn't really sure how I was gonna add any value to it.

(10:48):
So it was very casual in the sense where I was like, okay, here's like what we can create if you're interested. And we've come to discover that I think there are so many skills outside of the technical response or training delivery that are actually so valuable to create that really holistic

(11:10):
experience, I think.

Emma Smillie (11:13):
Yeah, I mean, I liken a good exercise to a good story. You have to be able to tell a story with it, don't you? In the same way you do marketing communications, it was all about how you tell that story.

Speaker 2 (11:23):
The collision happened this evening at one of Taiwan's main offshore wind farms. All crew members have been accounted for, though one sustained minor injuries.

Ines Costa (11:32):
Yeah, I think what we discovered is that it actually kind of feeds then into every element of it, because in that way you start from the beginning, from the very beginning, even when you're starting to plan anything. Suddenly you start finding, oh, I could use AI to maybe analyze all of this data from the client, all this data from their

(11:56):
operations, their history, things that happened in the past, current issues they might have, or even their long-term goals. Now we can also analyze the local legislation, the new legal requirements, all this like big sets of data that if you are a single person, you're suddenly like, oh, how am I gonna analyze

(12:16):
all this?
It's so much.
And suddenly you can quickly analyze all of that, and all of that feeds into that story, that narrative that you're creating. And it feels so real, it feels really authentic, because you are truly reflecting like the real-life scenario, instead of me just being, you know, I'm a random person in the UK building a scenario for the Caribbean, and I have no context whatsoever of

(12:40):
what's going on there.
So whatever it is that I'm gonna come up with is always gonna feel a bit disconnected in a way. So I feel like using AI at that stage, especially, really feeds into that building of a narrative. And then the visual elements that come afterwards, like the cool videos, the images, the social media posts, anything

(13:00):
that you create, it just adds to it. You know, the visual almost like brings it to life in a way that feels a bit more immersive for sure, because you know, we tend to be very visual beings, whether we like it or not. So seeing it, it feels like, oh, this is the real thing. I can see what it would look like.
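
To make the scenario-design step Ines describes a bit more concrete, here is a minimal sketch of how client operations data, local legislation notes, and exercise goals could be pulled into one prompt for a chat model. It assumes an OpenAI-style API; the model name, function name, and prompt wording are illustrative, not OSRL's actual workflow.

```python
# Sketch: pull scattered background material into one prompt and ask a
# chat model to draft the exercise's background narrative.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def draft_scenario_narrative(operations_summary: str,
                             legislation_notes: str,
                             exercise_goals: str,
                             region: str) -> str:
    """Draft a locally grounded scenario background from the inputs."""
    prompt = (
        f"Draft the background narrative for an oil spill response exercise set in {region}.\n\n"
        f"Client operations:\n{operations_summary}\n\n"
        f"Relevant local legislation and requirements:\n{legislation_notes}\n\n"
        f"Exercise objectives:\n{exercise_goals}\n\n"
        "Write a realistic scenario of about 300 words, clearly marked as EXERCISE material."
    )
    response = client.chat.completions.create(
        model="gpt-4o",  # assumed model; any capable chat model would do
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content
```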

Speaker 2 (13:18):
Emergency crews are battling flames while responders race to contain a potential marine fuel leak. Authorities say the vessel may have been carrying marine gas oil, a serious pollution threat if containment fails.

Emma Smillie (13:32):
Did you experience um any challenges or skepticism when you started getting involved with exercises and AI?

Ines Costa (13:39):
Yeah, absolutely.
There was a bit of resistance on something that feels like two sides of a coin, in the sense that on one hand, you know, you come into these projects, and I'm like quite new to OSRL, and also like I come from the marketing department. So everyone's just kind of asking, what exactly are you doing here? Why are you involved with like an incident response exercise or

(14:01):
whatever it is that we're doing? So it felt a bit like having to prove your value, having to prove that there is a reason why you're there, that you're bringing something to the table that so far the technical team hadn't had the chance to do just yet. And then the other side of the coin is you get thrown into it, because now you're in it, and suddenly it's not like

(14:21):
anyone stops to teach you what's going on. They're just kind of like, well, you're in the team now, so everyone just keeps acting as if you had the exact same level of like knowledge to start with when it comes to the technical knowledge of an exercise and a response operation. So it's like you're proving yourself, and at the same time you're treated the same, you know?

(14:42):
So it was a bit of having to learn about all of it by being thrown into the deep end, just kind of understanding through practice and through experience how exercises work. Having the knowledge of how these AI systems work is almost like you're learning and identifying at the same time. Oh, we can improve this bit by adding this element.

(15:04):
We could fix that issue or that lack of communication or anything that's going on by using these different skills.

Speaker 5 (15:11):
Boss, the backup equipment is a really old
system.
Only Travis was trained on it and he's on leave.
No one knows how to use it.

Ines Costa (15:19):
So that's then suddenly when you can prove your value, when you can actually show this is how we can make it better. I saw that pain point that you have. This is how I can contribute to making it better. So I think on a personal level, that's sort of my experience. But I think with the resistance in terms of accepting AI, there's just a resistance to it as a concept.

(15:41):
I think it's very fair that people who have been doing this for like decades, who've been out in the field getting that like real-life experience... Suddenly you come and you show them, you know, a bit of software that, okay, this can do the same job as you can, which obviously is not true, but you can understand

(16:02):
how someone would be like very skeptical and, you know, a bit apprehensive as well. So that's why it's a process, you know. You start with little things and you show people like the benefits that it can bring, and slowly, slowly you can see people turning around and actually thinking, okay, I can see how this can help me.

Emma Smillie (16:23):
Okay, so let's touch on the process then, because um you talk about little by little. That sounds like your approach is more about experimenting, rather than sort of starting out with, right, this is what the solution is to a problem. Is that right?

Ines Costa (16:39):
Yeah, there's like bits to it, I guess.
I think my approach to it is very much like, I don't know. I don't know if it is from coming from a more creative background, but it's very much like the heart of it is experimentation. It really is, you just mess around with things and see what

(17:00):
happens, really.
There's no point in adding like elements of technology to it just for the sake of it, right? So I think a process that's been working really well for us has been, you know, you identify an issue or something that isn't really quite working, and then that's when you start

(17:20):
experimenting. You start looking at what's out there, what tools are out there, what solutions could exist, and you just start like trying it out, really without many expectations, without trying to force it in a specific direction. Um, I think that's really important, is like going into it with an open mind and seeing what comes back. I think one thing that we found, like trying it out

(17:44):
sometimes with very technical, like technically focused people, is that things have to be done a certain way, very strictly. And that goes, I think, a bit naturally against what these tools are offering. Or it just goes against like innovation, really. You know, if you're trying to move forward, you need to be

(18:05):
open to other ways of doing it.
Like that process that you're used to doing for 10 years, maybe it's gonna look a little bit different now. So I think experimentation is at the heart of it, where to come up with a creative solution, yes, you have to identify the issue and you have to be open-minded to what it can look like now. So yeah, at the end of the day, it's not just for the sake of

(18:28):
it, it's with the intent of fixing an issue, with the intent of making, you know, our lives a little bit easier, really. That's what technology should be there for. So the human element is always there, but it's always like about, yeah, just trying it out and seeing what happens, really.

Emma Smillie (18:47):
Yeah, and I think that's quite hard for a lot of
people to get their heads around, actually. I think in marketing, we've been doing it for quite a while. Yeah, yeah. You try something out, it doesn't work, you try it again in a different way. It's just the way we're wired, but it can translate quite differently in other areas.

Ines Costa (19:04):
Well, which I understand, because if you think
about, I'm just thinking like when I do work with people from uh SWIS, from Subsea, for example, sometimes you're talking about like really serious stuff, you know. We're talking about, like, if you're not quite detail-oriented and very technical and very precise with things, this could have like catastrophic consequences, right?

(19:24):
So you can understand how people would get into that mindset, but it's also about knowing when it's not that deep with other things, you know. When you were talking about changing up a little bit the process that you follow to create an exercise, it's different. There is more freedom to experiment, there is more freedom to try it out a little bit differently, and being open to

(19:46):
errors.
I think being open to failure and actually realizing, you know what, this doesn't actually work. We tried it, let's find a new way. I think that's really important. I don't think you have less value for trying something new and it doesn't really work out.
That's okay.

Emma Smillie (20:06):
What other skills do you think people would need to, I guess, learn AI, start with AI? Is it that anyone can do it?

Ines Costa (20:13):
Uh, I mean, I definitely, definitely think so, because I
mean, from my own background, you know, I don't come from a scientific or technical background, nothing IT related, nothing like that at all. You know, I come from a very much more like creative background. I even, like, you know, was traveling for a few years and I

(20:34):
was like totally off the grid and like barely using a laptop or, you know, anything like that. So now being in a position where I'm like leading certain conversations internally at OSRL that are related to like artificial intelligence feels a bit like I'm a bit out of place.
But I think that's the beauty of it, is that anyone can do it

(20:56):
is, you know, as long as you have that curiosity, that openness for learning something new. Um, I think for me there was a bit of like, maybe a little bit of like almost going into ADHD hyper-focused mode, and suddenly you lose yourself in a rabbit hole of learning new things about AI.

(21:16):
So maybe there's a dose of that, or being open-minded to learn something completely new outside of your skill set and the knowledge that you already have and already been building for so many years throughout your career. I think that's a bit scary for a lot of people, which I get. But I think, yeah, I think as long as you're curious, curious

(21:36):
and open-minded, and as long as working with computers isn't too difficult for you. As long as you can do the basics, you can do AI for sure.

Emma Smillie (21:45):
Yeah, actually, to be fair, I firmly believe in that. I think you just have to give it a go sometimes, don't you, and not be scared of it.

Ines Costa (21:52):
I think up until now, the world of like IT and computers just felt very restrictive, because you had to know coding and all the sort of stuff that is not accessible to most people, but that's not the case anymore. So that's why I think that it really is like we're used to seeing these new, like these innovations as something that

(22:14):
only a select few can do, and you have to study and you have to do all this sort of stuff. And it's not like that anymore, I don't think.
Okay.

Emma Smillie (22:23):
Um, you've touched on the ability of AI to bridge skill gaps. Well, you talked about your own skill gap and how it bridges skill gaps. Um, maybe could you explain a little bit more how that actually works in practice and how you can use it to connect information across teams?

Ines Costa (22:39):
It's more of a reflection of like how traditional organizations are set up. You know, most of us will exist in like different departments, you know, you have your finance department and then your HR and your comms, like we're all sitting in different departments, and everyone within their team will have like similar skill sets to a certain extent, right?

(23:00):
So it's almost like you have these pockets working individually, almost like a machine, you know, they're all doing their own little thing and contributing to something bigger, but most times you don't have access to that bigger picture. You don't really know how that other department is actually contributing to the bigger picture. So I think what AI can support with is connecting us all in

(23:26):
that bigger picture, having us all working in this like common sort of operating system, but also allowing us to kind of... because yes, fine, you can have a SharePoint and I can have access to all these documents from finance, but I don't understand what they mean. I have no idea. There are no insights that I'm taking from it. But having AI kind of bridging that skill set, where it can

(23:47):
potentially help me analyze all these documents and all these data sources from across the company to bring me insights that can inform my own job, my own role within that big machine. I think that's a massive thing. I think it can allow us to make better use of the data that we

(24:08):
have.
It can allow us to make more informed decisions, more connected decisions, and be more collaborative in a certain way, because I'm also like, now I actually know what the other teams are doing. Now I actually understand what's going on. And I can think, oh, maybe we can collaborate on this. Maybe what I'm doing actually complements what that other

(24:28):
person is doing too.
And we can still work together, even though we're not even speaking the same language, both like literal languages across the world, but like skill set languages, you know. Even though we're doing very different things, we can still understand each other and still collaborate in that sense. It really will allow us to go from that traditional, we're all

(24:52):
separated but working towards the same thing kind of organization to a much more, like, almost mission-focused sort of organization, I think, where we see the bigger picture and we're working in a more proactive and more collaborative way towards the same goal, I think.

Emma Smillie (25:10):
The question in my mind then is, will AI drive better cross-functional collaboration, or do you need that cross-functional collaboration first in order to get the most out of AI?

Ines Costa (25:22):
Yeah, I think that collaboration is not about a tool. A tool just, like, enables it. I think collaboration comes from the culture that you exist within. I think we have to recognize that if you don't set up the correct tools for it and you actually just have blockers to collaboration, then yeah, people will have much less incentive to do it.

(25:43):
But when you put the tools out there to do it and you make it so much easier for people to collaborate with each other, then it's all down to culture, 100%. I think that's why it's important, before you start like spending on all sorts of tools and all sorts of like new gadgets and processes and whatever it is, it's all about actually creating a structure that promotes that

(26:06):
culture and invites people to collaborate with each other.
I think that's what we're trying to do at OSRL right now. So at this point, we kind of created a sort of working group focusing on AI, as you know, because you're in it too. Basically, we created this group, and the whole point of it is creating that culture of collaboration around AI,

(26:29):
we're bringing in people from the whole organization, like from all these different departments. And the idea is to have this sort of collaborative approach to AI. So instead of reinforcing that idea of having different pockets, each one of them trying to figure out how AI fits into their own team, we're actually bringing them all together so we can have organization-wide conversations about it, because

(26:51):
the solutions that I find for my team might actually be relevant to the pain points you have in your team. And you might have really cool ideas that I would benefit from. And we can actually support each other in implementing them, specifically when we're talking about quite complex issues. It's all about promoting that support, that talking, the

(27:12):
sharing ideas.
It's not about gatekeeping, it's actually all about opening up the conversation, so the more people can contribute, the better outcomes we will have. I think when we think about AI nowadays, most of us, without even realizing, are talking about like content generation, right? You know, creating images, creating social media posts,

(27:34):
creating emails, whatever it is.
But I think the real power of it lies like within data and the way that it can... like the management of data, the access to it, the insights it can bring, the way it can connect people who are, you know, on the ground, for example, to the people who are planning, to the people who are communicating, to the people who are strategizing.

(27:56):
It's all about connecting all of those, because they're all working towards the same thing.

Emma Smillie (28:03):
Yeah, yeah.
And that's where the power comes from, isn't it?

Ines Costa (28:10):
Similar to, like, you know, when you think about the internet and what it gave us all, like, you know, when it was first becoming more accessible to everyone, the level of power that gave people, and having so much access to this level of information. I think AI is just another natural step in that same progress, to be honest.

Emma Smillie (28:31):
Yeah, it's the power, but then there's also the
challenge, I guess, with data.
I know that's something you've come across again when we've been talking about data and how we harness it better.
Maybe you could explain a little bit about the challenges that you've faced in exploring AI, and the concerns, the legitimate concerns of people.

Ines Costa (28:53):
Yeah, 100%.
I think there's, and that's something that I've included in, you know, a couple of the presentations I've done on it, there are many layers of issues with it. I think obviously data security is immediately the first one that you come across, especially for us operating in the oil and gas industry. We have major corporations that are, you know, trusting us with

(29:17):
their data, with their information, and we're just thinking, are we putting all of that at risk for the sake of writing a quicker email?
You know what I mean?
It really... I think we've been very lucky, because we do have a very supportive team within OSRL who's, you know, really willing to work with our teams to find those solutions,

(29:41):
to find what is doable and what is like a real risk that we need to like be aware of.
So that, I think, would be the biggest risk that we encountered. And I think meeting halfway has been the way to go, where we have those teams whose job is to be the Debbie Downer who says, no, you absolutely can't do that for like legal issues or IT

(30:04):
security, data security issues, but then this is what you can do instead.
And then being the person on the other side who's trying to push for these new tools, also having that capability of saying, no, actually, I shouldn't be pushing forward just for the sake of it. I need to do it in a way that's safe, in a way that's like as

(30:25):
ethical as possible, in a way that actually doesn't put us all at risk in a way that's completely unnecessary. So, yes, maybe I won't get that tool or that feature that I wanted exactly. I will get this one instead, that allows me to do however much it can do. And I think that's fine. I think it's about meeting in the middle to actually find something that works.

(30:46):
But there are so many other considerations to have, you know. For me, like, something that's been really highlighted in the last year has been the environmental impact of it, which is something that actually, I got a real reality check speaking to people at Interspill who kind of showed me like actual images, like photos of the environmental impact it

(31:08):
was having in the regions around where the data centers are operating from.
And it's really made me think more like, right, this is why we actually need to invest in educating people, and we need to actually be clear about what we are using it for. Are we using it for things that are like worthwhile?

(31:31):
Should I be, you know, using a technology that might potentially harm the environment to write little emails and, you know, create little messages, or should I use that power for big stuff, for stuff that actually makes a real difference, that brings good out of it, almost in a way that

(31:51):
like, do the two offset each other? I don't know, but I'm hoping that using it for the right things, rather than just for anything I can think of, actually makes it better.

Emma Smillie (32:02):
The carbon footprint of it is quite significant, the more data I'm reading on it.

Ines Costa (32:06):
Yeah, a hundred percent.
And I think, like, more and more, I don't know, more and more legislation might come from it, to try to like stop the overuse of it and stop the corporations that are leading it from just doing whatever to get more profit. We don't know what's gonna happen. But I think if we get ourselves in that role of using it in a

(32:31):
conscious way, in a way that actually, like, brings benefits, if we're like teaching each other how to not be lazy with it and not just using it for any single thing, yeah. Who knows? Maybe we might use it for something good.
Yeah, absolutely.

Emma Smillie (32:48):
You mentioned we've done a few presentations on AI and AI in exercises. You did a paper with Dave Rouse at Interspill, I think. Um, is that the one, "If you're not using AI in your exercises, what are you doing?" Is that the one?

Ines Costa (33:04):
Yeah, that's the one.
Strong title.
Strong title.

Emma Smillie (33:09):
What was the inspiration behind that one?

Ines Costa (33:12):
Once again, it was something that I was pushed into. It was maybe a bit like right at the beginning, when I was talking about getting involved in exercises, not really seeing the value that I could add to it. That's exactly how I felt about writing a paper for a conference in the oil spill response industry. I could not see how I could bring any value to any of this

(33:34):
because, you know, I haven't been doing it for that long. And there are people with decades of technical knowledge there that I'm sure have so much more to offer. So what am I doing here? So I just got pushed into it, which I think is okay, because I'm happy that I did it.
Um, because as we were writing it and as we were

(33:55):
discussing all the things that we've been doing and all the value we've been creating, all the cool stuff that can come from it in the future if we decide to explore it more, I think I actually realized somewhere along the way that, wow, what we're doing can actually bring some change. It could actually allow us to grow, and it has applications

(34:19):
outside of exercises too.
It can really be almost like an entry point to a world of change within oil spill response. So yeah, it really was a revelation moment of, okay, maybe we do need to talk about it. Maybe we do need to showcase all the work that's been done in the background, even if like 80% of it was just pure

(34:40):
experimentation and seeing what's gonna happen with it.
And yeah, we had really like such good feedback from it. It was really like overwhelmingly positive. People were so happy with the presentation, they were so interested in it. We had so much follow-up. And every day we have more clients and members reaching out to us saying that they're interested in having AI

(35:02):
input into their exercises.
So I think that really tells me that we needed to do it. We needed to share that knowledge and get everyone involved and get everyone curious about what all of this is.

Emma Smillie (35:16):
We've talked kind of top level, but maybe you could dive more into the detail around that. So how does AI help make exercises more engaging, creative, tailored, kind of from a storytelling perspective, or even with the data elements that you talked about?

Ines Costa (35:34):
I think we touched a little bit on it earlier.
You know, I think the very beginning of it is that access to like all these different types of information. What it really gives us is easy access to in-depth kind of like insights. I think in a traditional exercise, you'd have this team, which many times, you know, is very, very small, it might

(35:56):
be just one person designing the whole thing. Being realistic, we are limited in terms of budget, in terms of resources. And, you know, you might be working halfway across the world in a completely different region and trying to build an exercise for a company that's completely different from what you're used to, with very specific operations, with very

(36:17):
specific regulations, that have very specific goals that they're trying to achieve with their exercise.
So in an ideal world, you'd have a team with a specialist in each one of these topics that you have to bear in mind when you're building an exercise. But that's just not realistic, right? Um, you'd spend so much time going through all this

(36:37):
documentation, all these papers, all these plans. It's just not realistic, really. So I think that's what it brings, is that capability of like quickly analyzing all of these sources of information and then bringing it all into one narrative, into a format that fits the exercise you're trying to deliver.

(36:58):
And it just allows us to kind of have insights on who our audience is as well. I think it's similar to how in marketing we have to know our audience, like, you know, understand their needs, understand their history, their motivation. The reason why we do that is because we're trying to build something that resonates quite deeply with them. So that's the same as what we're doing.

(37:19):
That's the same as what we do with our injects. So we will use AI to create, you know, scripts for our phone calls, our emails. We can use it to create images of the environment we're playing the exercise in, suddenly affected by a massive disaster, and we make it quite real and it really resonates with the participants.

(37:39):
We can create this sort of like media, news reports, social media interactions, social feeds, all this sort of stuff that actually mimics what it looks like on that side of the world. We were just doing something recently, for example, for El Salvador, and you know, having access to the information of how

(37:59):
the media works there, then, you know, being able to mimic that in our exercise too. These are all things that, if you didn't have access to AI to do that quick search and quick kind of like reasoning and bring it into your narrative... one person who's a specialist in a very specific part of oil spill response wouldn't have the capacity to do all that.

(38:22):
So I think that's all that we're adding to it when it comes to the design and delivery side of things.

Emma Smillie (38:28):
Have you got any other examples of exercises that
um you've been involved with, which have used AI?

Ines Costa (38:36):
Yeah, I think we've used it in quite a few things at this point. I think the first time that we had a really interesting reaction to it was the first time we used a video news report in a crisis management exercise for a client. And it was like a very high-level executive team, right? Everyone was very confident in what they're doing. Oh, I know how to handle myself in a crisis.

(38:58):
This is easy, this is fine, we have it under control. And then you show a news report video of the biggest national news channel, showing the face of... like, we tried to mimic that look of the news report, really down to every detail.

Speaker 2 (39:14):
Environmental officials confirm nearby
wetlands and habitats for the Chinese white dolphin are at risk. Satellite and drone surveillance are being used to monitor any surface sheen spreading towards shore.

Ines Costa (39:25):
And then you had a photo of the CEO, and it was criticizing what the company is doing, really doubting their capabilities, their competency, and just really making them look really bad. And suddenly the feeling in the room, the energy, changed completely. Suddenly, the CEO was very serious, and everyone was like,

(39:46):
right, let's get down to work.
We need to get this under control. This is serious business now. And just getting that feeling of how an inject like that can really change the energy in the room, because I think we have to be honest here. Technical people don't necessarily always create the most exciting documents or the most exciting injects.

(40:08):
They love a blank-page PowerPoint. So I think when you bring in this element, those visuals that feel so real and like quite high production... but it was pretty good. Uh, when um, you know, when you bring those elements into play, it really sticks, you know, it really makes a

(40:30):
difference.
Yeah, we've used it in so many other things. And, you know, Emma, you were there when we did our interactive workshop at the members' forum uh last December, where we gave our members a bit of a taster of all these different things we've been experimenting with.
And we ran a full-day workshop where we introduced incident

(40:53):
management and crisis management with like interactive elements, with a reactive uh scenario exercise. And I mean, we couldn't have done any of that without the help of AI. You know, I think we managed to deliver all sorts of like

(41:15):
realistic media injects.
We had personalized video news reports that perfectly reflected all the actions they took throughout the day. There were branching scenarios as well, like anything we could think of. We threw it in there.

Speaker 4 (41:31):
Where is it?
Come on, where's the contingency plan?
Boss, Alvin is badly injured.
He needs urgent medical care.
What do we do?
There's a phone number on the plan.
Let me find it.

Ines Costa (41:44):
And maybe some of those things would have been possible, but they definitely wouldn't have been as like streamlined and kind of slick in the delivery as they were if it wasn't for the support of those like digital tools and AI. I don't know if you agree. I think so.

Emma Smillie (42:00):
Yeah, no, I agree.
Definitely.
I think just the ability to dynamically change what we're doing and um yeah, the real-time stuff, the branching, yeah. We wouldn't have been able to do any of that without AI. Yeah. I mean, I've been involved in a few exercises without AI, but now primarily it's always got some sort of AI

(42:25):
element to it, I guess. It just feels a bit more, I don't know, flexible, a little bit less rigid, than when we've had it very much kind of scripted out, exactly what's gonna happen and when it's gonna happen. What are your thoughts in terms of the participants' experience? How does it make a difference?

Ines Costa (42:45):
I definitely think it is exactly what you were saying, is like, you know, you have this team, the prep team, the planning team, who puts together the script, right? And especially when we're talking, um, imagine a larger scenario or like a longer scenario that can last a couple of days, you know, you have this team who prepped so much for it

(43:06):
when you have the inject timeline. It can be massive, like 30 pages of injects, you know what I mean? And imagine on hour four of a two- or three-day exercise, the participants decide to do something crazy, like they make a crazy decision that completely alters the way that you planned the rest of your exercise.

(43:27):
You know, changing all of those injects that are going to come after it would be a really crazy amount of work. And it would take the delivery team and the sim cell team a bit away from that focus on the participants and the focus on what's happening in the room, because you'd be so busy in the background trying to edit your scenario and your injects.

(43:51):
So I think the fact that now you can literally just feed it into whatever tool you're using and say, this happened, change the next inject to reflect that, and it can do it in like a couple of minutes, that changes everything, you know. It allows it to be more flexible. So now, as a participant, I won't have someone in the room

(44:12):
tell me, oh yeah, I know that that happened, but this is how we're playing it. Never mind that. We're playing it like that because this is the script, and that doesn't feel real. That's not what would happen in real life. If there was an actual incident: oh, this is what we prepared for. Well, too bad. You need to adapt. So I think, with our exercises being able to adapt to that too,

(44:33):
it just feels so much more real. And you know, it almost makes you feel, as a participant... So, for example, what we experienced in that workshop at the members' forum, where we had branching scenarios and the media injects were like reflecting what they decided to do.
The feeling that people were getting was, oh, every action

(44:53):
matters.
Everything that I decide to do actually reflects in the real world of this exercise. And that feels more real and that feels more engaging. And I think for the participants, that makes like a
world of difference.
And I feel confident in saying that because of the feedback. Like you said, we've been using it quite a lot in the exercises

(45:13):
we've been delivering this year.
And the feedback from it is always like so positive. We always get comments on like how great it was and how much it added to the experience for them.
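
A minimal sketch of the "this happened, change the next inject" step described above, assuming an OpenAI-style chat API; the model name, prompt wording, and function are illustrative rather than OSRL's actual tooling.

```python
# Sketch: rewrite the next scripted inject so it follows on from what the
# participants actually decided, keeping the wider scenario intact.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def update_inject(planned_inject: str, participant_decision: str, scenario_summary: str) -> str:
    """Return a revised inject that reflects the participants' last decision."""
    prompt = (
        "You are supporting the sim cell of an oil spill response exercise.\n\n"
        f"Scenario so far:\n{scenario_summary}\n\n"
        f"Planned next inject:\n{planned_inject}\n\n"
        f"The participants just decided:\n{participant_decision}\n\n"
        "Rewrite the planned inject so it logically reflects that decision. "
        "Keep the same format, timing and tone, and mark it clearly as EXERCISE material."
    )
    response = client.chat.completions.create(
        model="gpt-4o",  # assumed model
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

# Example use mid-exercise (hypothetical values):
# revised = update_inject(next_inject, "Crew shut down the transfer pump early", summary_so_far)
```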

Emma Smillie (45:25):
What about things like deepfake videos, things like that? Have you been using them in AI and exercises?

Ines Costa (45:34):
So funny enough, I think those are like our best
sellers.
We're just so good at it.
It's good.
Those are exactly, like, you know, these videos that we've been creating reflecting like the media, or reflecting people from those countries that we're delivering the exercises in. Those are definitely the best sellers, because that's when

(45:56):
suddenly it feels less like a hypothetical scenario and more like something that's actually happening. And it feels wrong doing it almost, you know, because, you know, when we talk about deepfakes, we're talking about like using people's likeness and creating fake news and all this sort of stuff. It feels evil, it feels wrong, but we're using it for good

(46:18):
because we're, you know, preparing them for an emergency. So is it okay? I don't know, but it's definitely like our best seller,
I would think.

Emma Smillie (46:28):
Yeah, obviously there are ethical implications in AI, but that's for a whole other podcast.

Ines Costa (46:35):
I mean, to be fair, now we're getting to a stage
where obviously, during the experimentation stage, we really were just messing around with it and seeing what we could create. And now that it's become almost part of our service, we have done as much as we can to kind of keep it, you know, to keep it safe for the people involved. So for example, we're not creating avatars or deepfakes of

(46:55):
like any of our clients or any of our employees or anything like that. We're making sure to protect everyone's likeness and, uh, you know, not creating videos that could be dangerous. But also, like, all of our videos, for example, have watermarks across the entire screen saying exercise, exercise. So no one could really use those videos in a way that could look like fake news.

(47:16):
The more we learn about it, the more we're very aware of the fact that you need to do all these little things to keep people as safe as possible.
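
For the watermarking safeguard just described, here is a minimal sketch that tiles an "EXERCISE" watermark across a generated image using Pillow; the file names, spacing, and styling are illustrative assumptions, not OSRL's actual pipeline.

```python
# Sketch: tile an "EXERCISE" watermark across a generated image so it
# cannot be mistaken for real news material. Requires Pillow.
from PIL import Image, ImageDraw, ImageFont

def watermark_exercise(src_path: str, dst_path: str, text: str = "EXERCISE") -> None:
    base = Image.open(src_path).convert("RGBA")
    overlay = Image.new("RGBA", base.size, (0, 0, 0, 0))
    draw = ImageDraw.Draw(overlay)
    font = ImageFont.load_default()  # swap in a large TTF font in practice

    step_x, step_y = 200, 120  # spacing of the repeated watermark
    for y in range(0, base.height, step_y):
        for x in range(0, base.width, step_x):
            draw.text((x, y), text, fill=(255, 0, 0, 110), font=font)  # semi-transparent red

    Image.alpha_composite(base, overlay).convert("RGB").save(dst_path)

# watermark_exercise("news_still.png", "news_still_exercise.png")
```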

Emma Smillie (47:27):
You touched on um gamification, which I know we did a little bit of at the AGM. Um, have you done any more work in that area in terms of how we can use gamification in exercises?

Ines Costa (47:39):
It's still very early stages for that, I would
say, for us.
I mean, it's been really cool to see how other people in the industry are kind of using like low-tech solutions for this. So obviously, we've seen people delivering incident management exercises using Lego, which was really cool. Or was it response, something like that? But they used like Lego for it.
And we've had like Sea Alarm come to OSRL and deliver their

(48:02):
wildlife board game and stuff like that. So I think now we're trying to figure out how to bring those experiences into the digital world. So far, we've only tried it with like little workshops, small training sessions, nothing too crazy, but we found that introducing stuff like digital quizzes that have like a leader

(48:23):
board kind of score to add that element of competition has really worked really well, gets people really excited and really invested in things that normally we really struggle with engagement on, when we're talking about like ICS and IMS kind of stuff. Suddenly people are very invested in it, they're loving

(48:43):
all that content.
But also translating like board games into digital board games, and um we've done something with the cone of response where we use like a digital sort of board map where teams can interact together in the same map, and they can draw, for example, zones of response, they can identify environmental

(49:05):
sensitivities, they can identify, according to that, what's the best method of response, but all in a map where the whole team can play together in the same place. But the future, I think, will be really interesting. We've already discussed and we've already started looking into a little bigger projects. So we're talking about like sort of gaming platforms, like a

(49:29):
gamified experience of exercises where you have this one platform where you have access to like all your resources and all your methods of responding to an incident, and then you have this board with an environment where you know your incident is happening, and you can drop resources into that. So it's almost like oil spill response meets The Sims or something

(49:52):
like that, which would be quite cool. But these are all like ideas and little tests that we've been running in the background, and hopefully in the near future they can come true.

Emma Smillie (50:02):
Yeah, I guess again, another podcast
potentially about how you'd combine like AI with VR and AR to create a whole visceral learning experience could be something to explore at some point.

Ines Costa (50:14):
100%.
I mean, there are always like limitations in that, because obviously, like, none of this is cheap. And yeah, it's about picking, again, picking what's right and picking what works and not adding tech for the sake of it, but actually creating an experience that adds value. Yeah, I think we're getting there.

Emma Smillie (50:35):
So let's look to the future a little bit. In general, what are the trends you're seeing in the broader kind of AI space?

Ines Costa (50:44):
I think we've seen, most of us probably have seen, that it seems like every single tool, like software or anything that you have, has an AI integration now. So I think that's the biggest thing, it's just been this expansion into almost every element of your life. Some work better than others, you know. I think you can really see the companies who are very strongly

(51:07):
invested in their AI-integrated capabilities. So I think that's one of the reasons why I feel like it feels a bit redundant to be, oh, I'm totally against AI and I just refuse to use it. No, basically the main trend happening at the moment is full integration into all the systems that you use nowadays.

(51:28):
But I would say the big trends coming up are definitely the agents. I think that's what everyone is really excited about. Everyone's loving that idea of being able to use text and natural language to code an agent that will systematically do this task for you. I think that's very exciting for your day-to-day life, but

(51:51):
also particularly for your job.
I think we're going now, and you can see it with like OpenAI releasing sort of their like lifestyle agent mode, where we're kind of... we're going from that you open your chat and you ask it, you ask for a specific question or a specific task, and you go step by step in doing it.

(52:13):
We're going now towards, more, you go in there and you do a broader, more like general request, and it can actually integrate into these like different apps that you have, and the AI itself can break down the problem, the request, into its many little steps and perform each one of

(52:34):
them almost like a real-life assistant, you know. So I think that's like the real big trend that we're seeing at the moment, uh, with the agents. Gemini as well, they're releasing like the deep thinking kind of setting, right? It allows it to also do like deep search online for really specialized types of content, which I think in our field is

(52:58):
like super important, because it will allow AI to kind of be able to give us access to insights that are like less generalized online and make it like really specific to the technical knowledge that we need to have to perform some of the tasks we need to do for the operational side of our business.
Uh, I think it will allow us to gain more trust in AI to

(53:20):
support that side of the business, which I don't think has been happening so far. But yeah, just having that capability of performing deep search, retrieving specialized content, accurate information, and really focusing on like a deep thinking mode that almost feels more human, you know, because it is capable of processing a complex issue into, you know, a set of tasks and a

(53:45):
final outcome, which is how our brain works, really. Yeah, absolutely. And then specifically for kind of the oil spill response industry? I think the main interest that we've seen... so when we've been out talking to our members and just going to these like industry engagements, the questions that we always get are,

(54:06):
okay, how are we going to bring this into operations? You know, it's all well and good that you use it for marketing or you use it for exercises. How do we bring it into operations, though? And I think that's how I see it going next, is as we're building these capabilities for like deep thinking, for cross-functional agents, we will now be able to use AI almost

(54:30):
like system integrators. So we can integrate the systems from ops directly into the systems that support planning, and into comms, and into all the other departments that support each other. I think we'll be able to learn more from the existing data to kind of support our learnings and our strategies in the

(54:52):
industry, and particularly in operational terms, we can learn more from like past incidents, from current operations, from current plans. We'll just be able to... like, quality data will be so important, you know what I mean?
Like, from an operational side of things, instead of looking at

(55:14):
keeping data as just something where you check the box that you've done it, when you actually focus on creating that quality data, then that is what's going to directly inform your strategy. That is like so powerful. You know, you're basically bringing the data from the ground directly into the heart of how you strategize for the

(55:36):
future.
And sometimes that's been a disconnect, because they feel so far apart in the hierarchy of how corporations work. So I think that's gonna... it's gonna connect the two, it's gonna bring them closer. One thing I'm really excited about actually, and I've been talking to some people internally about this for like months now. But when we talk about agents, you know, when we're talking

(55:58):
about like this agent that, you know, you train it for a specific task, and then, you know, it can do it all the time for you. I can definitely see how we can train our own agents to create a little oil spill response advisor in your pocket kind of situation. You know, you train an agent, an AI agent, with like

(56:18):
all the oil spill response techniques and all the field guides and all the regulations. You can even train it with like learnings from past incidents or from preparedness work. You can, you know, if you're lucky enough, you might even have people in your organization with decades of experience who are willing to speak with the agent to train it in certain

(56:41):
things.
And then suddenly, when you're out in the field and you're not sure what to do in that specific situation, and you just don't have, you know, those field guides with you at that moment to really advise you best, you can just ask your little pocket advisor: hey friend, what should I do in this situation?
What do you recommend?

(57:03):
And obviously you shouldn't blindly follow what the little AI tool is telling you, but it can actually inform you on certain things where you wouldn't be able to access that information another way, or if you could, it would take you quite a long time to dig it out.
Yeah, you could turn the TPR wheel into a little agent.
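As a rough illustration of that pocket-advisor idea, the sketch below retrieves the most relevant snippets from a tiny, made-up set of field-guide notes and assembles a grounded prompt for whichever chat model is available. The snippets, the keyword scoring, and the build_prompt wording are all assumptions; a real advisor would sit on vetted guidance and proper retrieval, and the model call itself is left out.

```python
# Minimal, self-contained sketch of a "pocket advisor": naive keyword retrieval
# over a tiny in-memory set of field-guide snippets, then a prompt assembled for
# whatever chat model you have access to. The snippets are illustrative only,
# not OSRL guidance; the actual model call is intentionally omitted.
FIELD_GUIDE_SNIPPETS = [
    "Shoreline clean-up: assess substrate type before choosing manual or mechanical recovery.",
    "Containment boom: deploy at an angle to the current; booms tend to fail above roughly 1 knot head-on.",
    "Dispersant use: requires regulator approval and a suitable sea state; avoid shallow, sensitive waters.",
]


def retrieve(question: str, snippets: list[str], top_k: int = 2) -> list[str]:
    """Rank snippets by simple keyword overlap with the question."""
    q_words = set(question.lower().split())
    ranked = sorted(snippets, key=lambda s: len(q_words & set(s.lower().split())), reverse=True)
    return ranked[:top_k]


def build_prompt(question: str) -> str:
    """Assemble a prompt that grounds the model in the retrieved guidance only."""
    context = "\n".join(f"- {s}" for s in retrieve(question, FIELD_GUIDE_SNIPPETS))
    return (
        "You are an oil spill response advisor. Answer using ONLY the guidance below, "
        "and say clearly if the guidance does not cover the situation.\n"
        f"Guidance:\n{context}\n\nQuestion: {question}"
    )


print(build_prompt("Can we use a containment boom in this current?"))
```

The important design choice is that the prompt tells the model to answer only from the supplied guidance and to flag anything it does not cover, which is exactly the "don't blindly follow it" caveat above.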

(57:23):
Exactly.
You know, I can see how, in those more practical terms, we could definitely already come up with some stuff that can be used right now.
But yeah, I don't know.
I think, more long term, having that access to information in a completely different way, I just think it

(57:46):
will open people's worlds a little bit.
Okay, so if we're working in organizations and in teams where people come from the same background and have similar skill sets, or maybe are living inside the same bubble, it can be really difficult to think outside the box.
So I'm just thinking how, like, maybe AI is a tool that is

(58:07):
capable of retrieving information and insights from anything you can imagine, all over the world, from any perspective that you can imagine.
I'm just imagining, like, some good brainstorming sessions that you can have that maybe can give you ideas of things that you didn't know, that you didn't think about.
And will that inform the future strategy of, like, you know, the oil

(58:31):
and gas industry, of how we adapt to the energy transition?
I don't know.
So many things can come from that open-mindedness and from using it to, uh, broaden our horizons, I think.

Emma Smillie (58:45):
So let's move on to some closing reflections.
So, what's one misconception about AI and exercises, or marketing, or in general, actually, in any sort of context, that you'd love to clear up?

Ines Costa (58:58):
I think the main one is that AI will not do your job for you.
I think the misconception is that people think AI is gonna come and take away your job and do everything for you, or that, you know, you're gonna start to use ChatGPT and suddenly you don't have to do your job anymore, and that's it.
You can just get paid to sit without doing anything.
I think that's a big misconception.

(59:19):
It cannot do your job for you.
There is no magical button that you click and suddenly you say, do this for me, and it just does it perfectly, and that's it.
That doesn't exist.
I think AI is not gonna take your job away, it's just gonna change aspects of it, right?
So maybe, you know, for myself, I'm just thinking back to when I

(59:42):
first started, I would spend hours and hours actually writing the content of the social media posts or the emails that we were sending.
Right now, I have a tool that does those very quickly for me, but then I spend the rest of the time researching and strategizing and getting insights to know what to ask the tool to do for me, right?

(01:00:04):
I think at the end of the day it is just a tool, you're still the human, you're still in charge of it, right?
So the creativity, the knowledge, the value that you put into it, like the rule with AI is garbage in, garbage out, right?
So if the prompt that you're writing, the information that

(01:00:28):
you're putting into it, the things that you're asking are not good, they don't have good quality, then what you're gonna get out of it is also bad.
So your job's not gonna be done.
I think it's about that change of mindset of being like, I need to adapt: instead of being someone who spends hours writing

(01:00:49):
this, now I'm someone who has deep insights into the market that I'm operating in, someone who has a clear vision of what I want to achieve, and someone who knows how to communicate with this tool to then get the best outcome.
I think it's just a change in mindset: at the end of the day, you're the human, you're in charge, the tool will do what

(01:01:12):
you make it do, basically.
So you decide what is worth using it for.
Yeah.

Emma Smillie (01:01:19):
I think creativity, problem solving, critical thinking are gonna become the skills that people need going forward.
100%, more than anything, and then you can use AI to help, what, max out your productivity, for example, or, um, just save time.

Ines Costa (01:01:38):
100%.
I think we even talked about this the other day.
The two of us were saying, like, about the AI sandwich, right?
And how, you know, you are the beginning point, where you have the clear vision of where you're going, and then you use AI in the middle to kind of do all those boring bits in the middle, which is really

(01:02:00):
where you can work your productivity in.
And then you are again at the end of the process, where you will review what's been done, you will make sure that it lives up to the standards of quality that you want, and you understand how to integrate it into the wider picture.
It's all gonna come down to your creativity and your problem solving.

(01:02:21):
That's where I think our jobs are gonna start paying us more. Not more, I don't think they're gonna pay us more.
I think our jobs are gonna start paying us to be more high-level thinkers and more problem solvers than, you know, machines who do repetitive tasks, I think.
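One way to picture that AI sandwich is as a simple three-step workflow with a human at both ends. The sketch below is only illustrative: Brief, generate_draft, and the review checklist are hypothetical stand-ins, with the stubbed generate_draft taking the place of whatever AI tool does the drafting in the middle.

```python
# Sketch of the "AI sandwich": a human brief at the start, an AI drafting step in
# the middle, and a human review gate at the end. generate_draft is a stub
# standing in for whatever AI tool actually produces the draft.
from dataclasses import dataclass


@dataclass
class Brief:
    goal: str
    audience: str
    key_points: list[str]


def generate_draft(brief: Brief) -> str:
    # Stand-in for the AI call: in practice this is the "boring bit" the tool does.
    points = "; ".join(brief.key_points)
    return f"[DRAFT for {brief.audience}] {brief.goal}. Key points: {points}"


def human_review(draft: str, checks: dict[str, bool]) -> bool:
    """The human closes the sandwich: every quality check must pass before publishing."""
    print(draft)
    return all(checks.values())


if __name__ == "__main__":
    brief = Brief(
        goal="Announce the next spill response exercise",
        audience="duty managers",
        key_points=["date and location", "scenario theme", "who needs to attend"],
    )
    draft = generate_draft(brief)
    approved = human_review(draft, {"facts correct": True, "tone right": True, "fits the wider plan": True})
    print("Publish" if approved else "Revise and rerun")
```

The human supplies the vision at the top, the tool does the repetitive drafting, and nothing goes out until the review at the bottom passes every check.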

Emma Smillie (01:02:40):
So, what's the one thing?
If somebody's listened to this podcast, what's the one thing you would like them to take away from this conversation?

Ines Costa (01:02:48):
I think: don't be scared of it.
Uh, I think I can confidently say, just from the way things are going right now, AI really isn't going anywhere.
It's kind of a development that, you know, we've been seeing theorized and imagined in sci-fi

(01:03:08):
for so long.
It was just bound to happen and it was just bound to become a thing.
So now it's here and it's not going to go anywhere.
So it is scary, it is overwhelming.
I think even for myself, the more I learn about it, the more wary I am and the more aware I am of what I'm using it for.

(01:03:29):
But there's no need to be scared of it because, like we were saying, at the end of the day you still have the power to make the most of it and to use it for what benefits you.
You know, it's within your power to use it for the things that you know you struggle with, for the things that you

(01:03:50):
just don't like doing, for the things that you think you could maybe do better with a little push that's just not available to you at the moment.
It's there to support you with those things.
You just have to use it for those things, you know.
Nothing is gonna take away your intelligence, your creativity.
And, like, when I did a course on AI earlier this year, we

(01:04:11):
started a session with our trainer going online and using AI to analyze the job ads that were out there on LinkedIn for the last month in the UK for marketing roles, right?
And he analyzed the job descriptions for those job ads

(01:04:31):
and asked AI how many of those responsibilities could actually be replaced by AI content creation and automation.
And it gave a number like 49%.
It was something crazy.
Almost half of the marketing jobs available in the UK could potentially be replaced with AI.

(01:04:54):
So when you hear something like that, you're like, that is so scary.
But at the end of the day, I think it's about that change in mindset: it's only scary if you look at it like that.
But if you look at it like, okay, if AI is gonna do 49% of my job for me, what else am I gonna bring to the table?
What's the really cool stuff that I'm capable of doing that I

(01:05:15):
just don't have the time to do right now, that I can now bring to the table and just wow everyone and just be amazing?
So yeah, just don't be scared of it and make the most of it, and I think everything's gonna be okay.
And experiment.
Always experiment.
Always, yeah.
Because it always just gets boring, you know.

Emma Smillie (01:05:33):
Well, thank you so much for joining me today.
I have a lot, I probably had like a hundred more questions I could have asked you, but, um, we can always do a part two.

Ines Costa (01:05:42):
Oh, yes, we'll see if, like, the public will demand it.
Oh, we must know more.
That's great.
Okay, thanks, thank you so much.
It was awesome.

Emma Smillie (01:05:57):
Thank you for listening to the Response Force Multiplier from OSRL.
Please like and subscribe wherever you get your podcasts.
And stay tuned for more episodes as we continue to explore key issues in emergency response and crisis management.
For more information, head to osrl.com.
See you soon.