
July 14, 2023 41 mins


In this thought-provoking episode, we delve into the fascinating world of artificial intelligence and its impact on emergency management. Join us as Liam Harrington-Missin, Head of Data Technology and Innovation, sheds light on the profound changes brought about by AI technology. From the evolution of AI's role in handling tasks, such as oil spill detection through satellite imagery, to the game-changing introduction of large language models like ChatGPT into consumer domains, Liam offers insights into the rapidly changing landscape of AI adoption. We explore the implications of AI on emergency response exercises, misinformation management, and even the potential transformation of traditional responder roles. Join us on this journey to understand how AI is reshaping the future of emergency management and what it means for organizations and responders alike.

Please give us ★★★★★, leave a review, and tell your friends about us as each share and like makes a difference.


Episode Transcript

Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Speaker 1 (00:03):
Hello and welcome to the Response Force Multiplier, a podcast that explores emergency planning and response. On the Response Force Multiplier, we bring together compelling experts and thought leaders to provide a fresh take on key issues and cutting-edge techniques in this field. In each episode we'll dive into one aspect and we'll use OSRL's

(00:26):
unique pool of experts and collaborators to distill that down into actual tools and techniques for better preparedness and response to incidents and emergencies. My name is Emma Smiley, we are Oil Spill Response, and this is the Response Force Multiplier.
In today's episode, we discuss the most seminal disruptive

(00:48):
technology to come along in years: artificial intelligence. AI is, of course, at the forefront of modern global conversation, as people wonder if this technology is going to have the impact some predict, in ways both positive and negative. And, more specifically for emergency response, how will AI affect our response planning, and how should we approach and view

(01:09):
AI as this disruptive technology develops? So today we explore what kind of disruption this will bring. Will it completely change the industry and cause organisations to rethink their entire structure? Will it take everyone's jobs, or will it simply be an extremely powerful tool that optimises work and brings efficiencies that we never could have imagined?

(01:29):
To discuss this we speak with Liam Harrington-Missin, Head of Data Technology and Innovation at Oil Spill Response. Liam discusses how he views AI in the broader context of emergency planning, where he sees the dangers and the benefits of using AI in response planning, and how organisations can position themselves to make best use of this emerging technology. Right. So, hi, Liam, thanks for joining us.

(01:50):
Great to have a conversation with you about AI. So can I just start off by asking you to briefly describe your background and your role in Oil Spill Response?

Speaker 2 (01:58):
Yeah, of course. So I studied oceanography at Southampton University, about 20 years ago now. My interest really was the physical application of oceanography. So not so much biology, but more: how do we measure it? Why do we measure it? What problems do we solve? Very much how technology can help across the industry, really, because technology was starting to really spin up.

(02:21):
Tools started to become a lot easier to use. So it wasn't that we couldn't do it before, but it was just not practical for us to learn and spend all that time becoming experts in the tools. But the ability to adopt tools is becoming, and continues to become, far, far easier. You can do a lot more complex things very, very quickly. And so I shifted about three years ago now to the executive team,

(02:41):
to the business as a whole and to the oil and gas industry, or the energy industry as they're known now, looking at the application of technology to address oil spill problems and big-shift stuff. So my current role is the Head of Data Technology and Innovation, which is an incredible catch-all statement. I get messages from every sector possible because, I mean,

(03:05):
technology is everything. Data is everything and everywhere, so anything from cybersecurity to new types of material for collecting boom kind of drop onto my desk. So, whilst my job title is all-capturing, very much my focus is on the application of digital tools to oil spill response, the software side of things.

Speaker 1 (03:31):
So, moving on to AI, which is kind of where our conversation came from. Could you talk a little more about AI in general, kind of in simple terms? Talk about what it is and why everyone is talking about it at the moment.

Speaker 2 (03:41):
So if you take the simplest tools, like booms and skimmers and collecting oil like that, the technology itself: there's small efficiencies here and there, there's new slight variations that capture certain things, but that technology space tends to be relatively static, it's plateaued. Deepwater Horizon, I suppose, was a big trigger for a big technology shift in oil spill response.

(04:02):
The introduction of the well caps, the advancement in the aircraft capabilities with the 727 aircraft: these were really big, major technology projects that have come across our desk and have completely transformed the organization. Then COVID happened, really, and that was a big catalyst for the second kind of big technology shift that I've seen at OSRL.

(04:26):
We're talking over the internet now, where Microsoft Teams previously was kind of a thing that happened, and Skype was around, but people kind of only used it. Then COVID happened and it transformed everybody's working life, and the big shift there has been suddenly this bigger drive

(04:48):
and adoption and evolution of technology to help us work smarter with data and to communicate over big distances and, yeah, change the nature of work. And that in turn has led to some really big technology shifts and mindset shifts in the application of data for oil spill response, for emergency management, and for just work in general.

Speaker 1 (05:06):
And how have you seen that evolve and change in your time at Oil Spill Response, and what are the trends that you're witnessing?

Speaker 2 (05:13):
AI isn't anything new. I mean, it's been around for a very long time. It's really just about computers replicating human tasks as effectively or more effectively than before. So, whereas typing a formula into Excel isn't artificial intelligence, predictive text on your phone, for example, is starting to hit the artificial intelligence side of things.

(05:36):
It's smart, and I think we're seeing these pieces of artificial intelligence pop up all over the place, and they have been around for a very long time. On the front-end emergency management side of things, before the beginning of this year, people probably might have thought of artificial intelligence as the way oil spill detection works in satellites. So a satellite goes over, captures a picture of the ocean,

(05:59):
and artificial intelligence is able to create an outline that says: this area here is likely to be an oil spill; this area outside of it isn't an oil spill. That's kind of machine learning on imagery. At the beginning of the year, the reason why it really went viral, and it did go viral, was the introduction of this large language model, ChatGPT, which really took it to the

(06:23):
consumer rather than to these specialized applications like imagery analysis and noise cancellation and things like that. And I've been to conferences this year, and everywhere you go you see AI being thrown up on banners, ChatGPT being thrown up on banners, the application of AI to your work streams being thrown around. Microsoft have bought into the original ChatGPT and we're

(06:45):
seeing those AI things come out into the Microsoft products. And really, what shifted it is that it's now so easy to engage with artificial intelligence. I mean, ChatGPT, if people haven't been on it, it's just like sending a text message to someone, and that someone just happens to be the smartest or most intelligent person on the particular topic

(07:06):
that you tell them, or they can be really creative, or they can do this, or they can do that. You can get them to write poems for you on a particular topic. It's really fascinating, when you try and break it and stress-test it, how well it performs all the way through. Up until that point, everyone was like: Google is your source of truth, right? Google is a verb that you do to find information. Whereas this takes it a step forward, and rather than search

(07:29):
for something on Google and then infer that knowledge into the answer to your particular problem, you use ChatGPT to skip all of that, and it will just tell you the answer to your particular problem in a way that you can really comprehend. So it's a very exciting tool. It is, of course, prone to challenges, and as I've watched ChatGPT evolve, I've seen those things start to come through as

(07:52):
they react to the social concerns that AI brings. But it certainly doesn't look like it's going anywhere, and I think it's going to accelerate quite quickly and impact every business in all sorts of creative ways.

Speaker 1 (08:04):
Absolutely. But you've been doing lots of experiments with AI, haven't you?

Speaker 2 (08:09):
Yeah, across the board, from educating me on the different types of paprika around the world, all the way through to how we can use it on oil spills. My involvement in oil spill response exercises is quite large. Oil spill modelling injects are always one of those core components of an exercise, in terms of creating a scenario which people can get behind and understand and start evolving

(08:32):
the thinking and get to the outcomes. But there's always more room for realism. Exercising is always very difficult to make realistic, and it's the ability to react with realistic information that really started sparking my interest: creating content so that a decision was made during the exercise and going, this

(08:54):
decision has been made, now create a hundred Twitter feed posts that react to that public announcement, so that you start feeding back that information. I can tell it to have a certain sentiment, so the public is cross or the public is excited. It's really powerful to create those very reactive content injects, which add a level of realism and add to that pressure,

(09:16):
which is also really hard to replicate.
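The kind of sentiment-controlled inject request described here can be sketched as a prompt-building step for a chat-style LLM. This is a minimal illustration only; the function name, wording, and message structure are invented for the example and are not any specific OSRL tooling:

```python
# Hypothetical sketch: building the chat messages that ask an LLM to
# generate social-media "injects" reacting to an exercise decision,
# with a controllable sentiment. All names and wording are illustrative.

def build_inject_prompt(decision: str, sentiment: str, n_posts: int = 100) -> list[dict]:
    """Return a chat-style message list ready to send to an LLM API."""
    system = (
        "You are a social media simulator for an oil spill response exercise. "
        "Generate realistic public posts reacting to official announcements."
    )
    user = (
        f"The incident command has just announced: '{decision}'. "
        f"Write {n_posts} short Twitter-style posts reacting to this, "
        f"with an overall {sentiment} public sentiment."
    )
    return [
        {"role": "system", "content": system},
        {"role": "user", "content": user},
    ]

# Example: the "public is cross" inject from the conversation.
messages = build_inject_prompt(
    decision="Dispersant use has been approved offshore",
    sentiment="cross",
    n_posts=100,
)
```

The returned `messages` list would then be passed to whichever chat-completion API the exercise designer is using; the sentiment knob is just another phrase in the user message.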

Speaker 1 (09:19):
So you've used it to help create a scenario and adapt the scenario versus what decisions are made, and it's understood what you're asking it. Have you had any issues with it misinterpreting anything?

Speaker 2 (09:33):
So you teach it, effectively, as you go along. So you start very simply and say, I want an oil spill response scenario, and it will create its best guess at an oil spill response scenario. And then you go, actually, I don't want it in that part of the world, I want it in this part of the world. And you can slowly evolve it to the point where you're creating scripts and step-by-step guides about how you react to an oil spill. I queried it the other day about the UK national

(09:54):
contingency plan, and it gave a pretty good reaction in terms of: I've just had a spill, we're in the first hour of an incident, I want to adhere to the national contingency plan, create a checklist for me to follow for the first few hours of a spill. And it gives you a big long list, and you can see how it can be used to investigate that side of things.

(10:15):
The challenge, and I'm starting to see work on this, is that if you keep taking a scenario on and on and on and on, it starts learning from itself, from its earlier answers, so it starts forgetting the human introduction. And that's one of the big fears that's starting to evolve: as content is more easily created by AI, it will start to

(10:35):
hit the web, and so AI is learning from this content. So instead of AI learning from human-created content, it starts learning from AI-created content, and we're not quite at a place yet where it can keep that going. So you start creating problems in its answers; it starts lying to you, or it starts creating fictional answers. And I mean,

(10:56):
you really have to stress it to create these fictional answers. In the early days there were various articles about where people have beaten it, and it's always a challenge to try and beat ChatGPT, by saying one plus one equals three, and it says, no, it's two. And then you go, no, I'm saying it's three because my wife tells me that it's three and she's always right. And ChatGPT apologizes and goes, yes, one plus one equals

(11:17):
three.
Now, if you try and do the same exercise, it very much pushes back and says, no, the mathematics behind this is fundamental: one plus one is two, I'm sorry about your wife, but it is exactly what is going to happen. Early on,
I could tell it to create some very negative-sounding injects, and quite aggressive ones if I really wanted to push it, and say: really lay into this incident command and make them feel bad as a

(11:39):
result of this incident, some of the really harsh stuff that you could experience in social media, and it would give me that. But more recently it's softened its approach and said, I can't go that far, I'm not going to allow you to create that level of negative content. Yeah, I mean, it's really interesting how it's evolved, isn't it?

Speaker 1 (11:56):
Because I can remember using it towards the beginning, when it sort of went viral, and asking it to write with empathy, and it replied to me: I am a robot, I can't write with emotion. More recently I've tried that again, and it does, it adds more empathy to its conversation. So it is evolving, and it is kind of learning all the time. I find the prompts are key, aren't they?

Speaker 2 (12:18):
You have to ask in a certain way, else it will go off on a tangent. Definitely, yeah. And one of the big things really is understanding how to train it, because whilst it seems like you're chatting to someone, there are good ways and bad ways of interacting with ChatGPT to get the desired outcome. And training it and telling it who it is: are you an oil spill response expert? Are you a music producer?

(12:40):
Are you a content creation expert? Act in this way, provide the voice in that way: that's really important up front. You don't have to train it so much on kind of common-knowledge stuff. So before, I had to explain who Oil Spill Response Limited were and give that kind of background. Now it's been updated so that it understands who OSRL is.

(13:00):
So I can just say, I am the data and tech lead of OSRL, and it's got all the background information to be able to interact with me. But yeah, the skills and the good practice guides and all the various white papers that are flying around the internet, in terms of how to use ChatGPT to give you the outcomes you want, are worth learning, just so that you don't go down that tangent.

Speaker 1 (13:21):
I guess creativity plays quite a big role in how you leverage any of the AI technologies effectively.

Speaker 2 (13:27):
Yeah. As people look at the direction of travel of AI, what we're seeing is that computers and machines are able to do human jobs far better than humans can. They can work faster, they're more accurate, and now we're very much on the trajectory of being more standoffish and saying, actually, this is the outcome I want, you figure out how to do it, and it will come up with a

(13:49):
pretty viable solution. Because I do believe that AI can deliver an awful lot of value to the world, but at the same time, we don't want people just sat in chairs wandering around doing nothing. You want society to go forward. So understanding what education needs to move towards is really interesting. And as someone that employs people: what skills do I need to employ people for, to make us resilient as an organization?

Speaker 1 (14:14):
Yeah, it's definitely a conversation that's come up in the marketing and comms world, for sure. Everywhere I see it, people saying the same thing. To be honest, even I was

(14:35):
in the playground picking up my daughter from school, and somebody was talking about why would I use ChatGPT. She was obviously in marketing. It's going to take all those time-consuming things like that and do at least a draft that you can then adapt and publish out. But the strategy side and the thinking side, it's not going to do that for you.
And from a crisis communications element, can it write a great statement? At the moment it might do, given the

(14:57):
changes I've seen, but it doesn't add the human, the concern, the feeling, and the sense check that you need to do for your crisis comms. So it is interesting.

Speaker 2 (15:08):
The softer skills, teamwork, creativity, dynamic thinking, those kinds of skills, very much using all the tools available: those are the kinds of skills that we start looking for, the passion behind work. Now, an observation that I share with teams all the time is that if you're doing the same thing more than once, then a computer can do it, because all you're doing is a repetitive task, and very simple tools now will just replicate that simple

(15:32):
task. And we do it all the time, right? We fill out forms on an incident and things like that. All of that stuff is humans slowing down a response. They're not putting their brainpower to solving the larger problem. Problem solving is a huge skill that isn't going to go away, but we tend to focus more on teaching the tools to solve problems, rather than the skills to understand and creatively solve problems.
I think it will impact all walks of life very dramatically and, in response, could very much replace a large majority of the tasks that humans do. It has the capability to do that. The big question is whether humans are going to trust it

(16:14):
enough to allow it to do that.
You could have a far more effective response if you just handed over the reins to an AI and said, you solve this spill, and within seconds you'd have all the paperwork completed, and it will be fired off and vessels will be heading out to the right location. It will be optimized based on trillions of scenarios that it's analyzed. As soon as you introduce a human decision into that, you slow it

(16:36):
down considerably, but you've introduced a human decision into it, which everyone feels better about. And that's where it is interesting to see how we will evolve as a society: how much we trust AI to deliver. It's no longer about whether it's feasible, it's whether we're going to let it. Are you seeing sort of artificial

(17:10):
intelligence being integrated into emergency preparedness and response at the moment, or is that a future trend? Again, for kind of very supervised examples, I wouldn't be surprised if ChatGPT or equivalent was being used to create content during exercises to add realism here and there. But it's not running the exercise; it's just a tool that a content creator like yourself, or someone like me that's creating

(17:32):
data inputs into it to add realism, would use very much on the sidelines. I'm interested to see what an oil spill response scenario or an exercise could look like if driven and owned by an AI and supported by humans, if you swap the dynamics around. I don't know enough to know whether it's capable of doing

(17:52):
that right now, but I'd be really interested to know how it would work. But again, with the backing that it's got and the trajectory that it's going in, I can't see it not playing a bigger and bigger part, and I can see it being a differentiator for an organization that gets in early to go: actually, what takes you one person an entire day to do is now just automatically handled in the background.

(18:13):
All the approvals are taken care of in seconds. Dispersant pre-approval happens very automatically, by an AI in a government talking to an AI in a requesting party; those permissions happen straight away. Again, we're back around to that question about how far do we trust AI. Because it will make you more efficient, but do you

(18:34):
trust it to be effective?

Speaker 1 (18:37):
Yeah, trust is a big thing there. But it's changing so rapidly. I mean, how do you keep up to date with everything that's going on, and what advice can you give to others to help them keep up to date?

Speaker 2 (18:48):
You're always on the back foot with technology these days. You have to accept that you're never up to date. There's millions and billions of people doing incredible things day to day. I keep an eye on some of the official directions: what ChatGPT is doing, the release notes, and the roadmap of the big forefront hitters. Because it's now emergent, right? November, December time, LinkedIn started going crazy

(19:10):
about ChatGPT, but before that I had no idea about large language models, and then it hits and gets rolled out very quickly. We discussed this kind of thing at the beginning of the year: ChatGPT was massive back then, but in my space the hype's disappeared now, and I see articles here and there. But Apple's released the new immersive technology headset, and that's now the new big

(19:33):
thing coming through.
Now, keeping abreast of technology: you just have to accept that you're not going to. The thing that's easier to keep on top of is the problems that your organization or emergency management is having. That's not evolving as quickly as technology.
So when you're finding the problem, and you've got a pretty basic understanding of all the different technology spaces,

(19:54):
then you can start creating innovation, just by marrying up the problem with the new technology. What wasn't viable six months ago may be viable now. So, yeah, accept that you're never going to be on top of it all, and the ideas could come in from anywhere in the organization or in the industry, and keep an open mind.
Really, it's really easy to get excited about an idea, and then

(20:14):
the next idea comes along, and the next idea comes along. I mean, great ideas are really easy to find these days, because technology enables us, really, because it's there with information everywhere. It's turning that idea into something tangible which is where the real challenge is. It's executing ideas, it's delivering on ideas. And then you've got no choice but to really best-guess which

(20:37):
technologies are the most viable and when to jump on the bandwagon.

Speaker 1 (20:41):
Yeah, which leads me very nicely to my next question. I mean, one of the things that goes around my LinkedIn feed, because a lot of it's around crisis comms, is fake news, and the ability just to generate a whole heap of information that isn't even correct. So I mean, I guess there's the chance that the use of AI could

(21:03):
actually confuse the situation, and how do you tell the real from the fake?

Speaker 2 (21:08):
It's not a problem where I can say, well, this is the solution: you just fire AI at it and AI will tell you what's true and what's not true, because it won't. This is one of the big shifts that someone somewhere is going to have to figure out. Is it Twitter that started handling fake news by having verified accounts and having external people verify information?

(21:28):
You can't stop misinformation being published, and potentially you can't even stop it gaining traction and becoming viral. You have to educate people on how best to verify the information they see in front of them. I see it all the time on different social media platforms. It's so simple for me to act like an authority on anything now. I can get AI to create various articles that make it

(21:52):
seem like it's true. Verifying information is one of those steps that we're going to have to learn how to do a lot better, and it's really interesting, because it will continue to slow down a response, or create havoc during a response if done really well, or really lead people to have to do work they shouldn't have to do. I can give an example from way back, before the large

(22:15):
language stuff, there was an incident, and there was oil on the land near the marine environment. But in the marine environment there was a biological residue on the surface, nothing to do with the oil spill; it was a biological residue, like seaweed. Satellites picked it up, and satellites can't detect oil. Satellites can detect a signature that could be oil, and

(22:35):
they can say likely or unlikely to be oil, but they won't say this is definitively oil. But if you see that residue on a satellite image, you can very quickly put out a message saying, look, oil's in the water. Suddenly, even though we know it's not, we're having to react to that public engagement, that observation of oil in the water, and it's very hard to backtrack once that image is out

(22:59):
there, because maps are very seductive in terms of content. People love a map. A picture showing clearly what looks like an oil spill on the water is really hard to disprove and explain, especially to a public who are guided not necessarily by the information coming out of the comms office of the incident, but by the downstream media outlets that are potentially

(23:21):
adopting the stories that sell, right, rather than the stories that are necessarily factual. That's just one of the things that you need to exercise and understand, which, again, I don't know whether that's done. I certainly haven't seen that level of focus on an exercise in terms of handling misinformation during an incident, because it could completely derail it.

Speaker 1 (23:38):
Absolutely. I guess we're starting to talk about the risks, and there are risks, aren't there, in relying heavily on AI in emergency preparedness and response. So we've talked about misinformation. What about too much data?

Speaker 2 (23:54):
So you take another piece of technology: the internet. We rely on the internet. If we didn't have the internet during a response, how would we respond? And that really breaks a lot of people's heads, because it's so fundamental to everything now that it is critical that we have the internet, and we have various resiliences to try and make sure that we have it. I can see AI being in the same situation in five,

(24:16):
ten years' time: we won't be able to respond to an incident without AI or without all sorts of emergent technologies. Now, it will naturally become more resilient, it will become more trustworthy, it will become more regulated, it will be better understood, in the same way as the internet is. I mean, you can break things with the internet. You can find misinformation if you actively

(24:40):
search for it or you go to the wrong places. Managing that risk: part of it is you just have to watch what's happening in that regulation space. Don't use a dodgy AI tool because it's cheap; use a regulated AI tool that conforms to various regulations and stuff like that.

Speaker 1 (24:56):
Well, the only question I was going to have, and I don't know whether this one flows, was: how secure are they? Because that's something that came up in one of our crisis exercises, where I was using ChatGPT to help me quickly, um, with one of our own exercises, exercising our own crisis management team, and I was using ChatGPT to quickly give me an outline that I would then add to. But how secure are they?

Speaker 2 (25:16):
It is really easy to share very sensitive information, and it is very appealing to share very sensitive information with ChatGPT, creating a new strategy for your organization, for example. It's a good person to bounce your ideas off. It can be a really good advisor on the various strategies. But of course, you're sharing the highest level of intel about

(25:37):
your organization with a platform, and the concerns about security are valid. You shouldn't share sensitive information with an open platform unless you are really confident about its security. Putting sensitive information onto the internet is always a security risk. You can ask ChatGPT about how secure it is, and it will claim

(26:00):
that it doesn't share information outside of the chat and it's secure to you. So it just adds the information to that particular model and then gets rid of it. But I've got a chat history now, so it is being saved on someone else's data platform, encrypted or otherwise. It should always be in people's minds what they share on any platform, regardless of ChatGPT or anything like that,

(26:21):
because the internet and applications in your organizations will evolve very quickly, and you don't necessarily know which ones you can share information with. Another big challenge that most organizations face: so, going back to the data, you were going to explore that topic of too much data.
The amount of data that's being generated in a spill or an incident, any kind of marine pollution event, is dramatically

(26:41):
increasing over the last few years, really. You've seen the impact within OSRL over the last 18 months or so, on the different incidents we've responded to. The reason why the data is expanding so quickly is how easy it is to get imagery of incidents. I mean, you can take a picture with your phone and it can arrive. But drone technology, remotely piloted vehicles and things

(27:03):
like that, these things are really cheap to get now, and they're at a level where they really add value to an incident. Having that eye-in-the-sky picture is now cheap and easy to deploy, but these things are huge data files compared to your standard text file. We've got a huge challenge with imagery within OSRL, not just within incidents but across the organization. Imagery files, video files: they take up vast amounts of space

(27:26):
and they're also really hard to search and explore. A video file: you just get a file name, and then you have to watch the video to understand it, whereas many files you can search within; they're structured in a way that you can easily explore. Whereas with images, for instance, we can collect 50,000 phone camera images, and unless they've been properly tagged and catalogued, then they're just 50,000 files that you have to manually go

(27:48):
through to explore, unless, potentially, you deploy AI to scan them and extract the key bits of information. Which is another great use of AI: image categorization. So where I'm coming around to is that AI can be a real tool to take all these vast amounts of information that come in on a day-to-day basis and turn them into stuff that tells people where

(28:10):
to focus their attention.
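The image-categorization idea can be sketched in a few lines. This is an illustrative stub only: `classify_image` stands in for a real vision model, and the file names and tags are invented for the example, not OSRL's actual tooling:

```python
# Hypothetical sketch: auto-tagging a large batch of incident photos so
# they become searchable. `classify_image` stands in for any real
# image-classification model (a cloud vision API, a local CNN, etc.);
# here we fake it from the file name purely for illustration.

def classify_image(path: str) -> list[str]:
    tags = []
    if "shoreline" in path:
        tags.append("shoreline")
    if "sheen" in path:
        tags.append("possible-oil-sheen")
    return tags or ["uncategorized"]

def build_catalogue(paths: list[str]) -> dict[str, list[str]]:
    """Map each tag to the images carrying it, so responders can query
    'show me every possible-oil-sheen photo' instead of scrolling
    through 50,000 files by hand."""
    catalogue: dict[str, list[str]] = {}
    for path in paths:
        for tag in classify_image(path):
            catalogue.setdefault(tag, []).append(path)
    return catalogue

catalogue = build_catalogue([
    "img_0001_shoreline.jpg",
    "img_0002_sheen.jpg",
    "img_0003.jpg",
])
```

The point is the inversion: instead of 50,000 opaque file names, you get a tag index that can drive the "where to focus attention" question described above.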
And that's one of the big revelations about the whole visualization space. In incidents I've seen incident command rooms which are just blank walls, and you stick pieces of paper up on the wall as things come in, especially in the very early phase of an incident, or for someone that doesn't have a full command center set up. You're just trying to get as much information up on the wall

(28:30):
so the humans in the room can assimilate it. But it's all about that process, right? You go from data to information, to insights, to decision. That's the workflow that we're always working for. We're always looking for that great decision. More and more data will help inform you to make that great decision, but if you get overwhelmed by it, it's very easy to go: I haven't got the brain capacity to assimilate this data

(28:52):
to make a great insight. Where I can see AI coming in is, again, it's trying to shortcut that bottleneck, which is the human decision-making side of things. We can no longer expect anyone to be able to look at every piece of information coming in the door for an incident.

(29:12):
Stuff will get missed if you rely on people doing it, so you have to rely on technology to help synthesize that data and make it easier for humans to ingest, so that they can make the decisions, because we're still relying on humans to make that decision.
That's not going away anytime soon, despite the capabilities of AI.
What you do with AI is use it as a tool, which is what it is: take that information and train it to say, this is the stuff that we're really interested in, highlight this on the big key notes board or something like that, and train it that way.
And that, I think, is well within our grasp in the next few years: being able to use AI to supplement the tools we have to synthesize data, so that the right bits of information, at any

(29:58):
moment in time, can be put up on the screen or queried from the database. And some of the exciting stuff that's coming through from Excel is this ability to ask Excel questions about a spreadsheet, and it will create your graphs and your statistics based on the question you've asked it, rather than having to ask a business analyst to

(30:19):
produce a series of graphs that may or may not answer your question, and then you have to go back, and things like that.
So AI and its ability to synthesize large amounts of data, I think, is one of those near-term wins that we're going to see, that people are going to feel more comfortable with, because it's not the decision that's being artificial, it's the insight before the decision that's being artificial.

(30:39):
We can feel more comfortable about that, and I can see that being the next sensible step. That does seem to be where the real value would be, certainly in our world of oil spill response.

Speaker 1 (30:51):
I guess my next question was around the more human element, the actual physical response, things like wildlife response, that sort of thing.
I'm not sure I can see a world where AI would take over that.
What are your thoughts?

Speaker 2 (31:09):
One of the really interesting and innovative things is that all these technologies are coming together simultaneously.
There's artificial intelligence, which is one of the top emerging technologies on my radar.
Autonomous vehicles is another one.
Immersive technology, so headsets and things like that.
Geospatial tools that I've mentioned before, things like common operating pictures.
Those four are

(31:32):
big topics.
It's unrealistic to expect there aren't feedback loops across all four and many others.
So AI in autonomous vehicles is one of those things where you can start to see how it would start to replace the human side of things in the physical world.

(31:59):
We can either do it with people, so boots on the ground and things like that, or, more and more, we're seeing robots take over: robots in warehouses, robots in surveys and high-risk areas.
When you come to things like deploying booms and skimmers, we're already seeing some of the startups, people going: well, here's your autonomous surface vehicle which has a boom attached to it.
Here's your robot that goes down a beach and collects all

(32:20):
the pieces of plastic automatically, because it uses camera feeds to identify what's plastic and pick it up.
With things like wildlife cleanup, it is difficult, because, thinking of it from a problem point of view, you don't want to hurt the animal by cleaning it, right?
That's why we rely on people: people are gentler, they can understand touch and things like that.

(32:42):
Again, if you look at the mass production side of things, in food processing, for example, there's an awful lot of autonomous machinery that can very carefully handle things like salmon, so you don't bruise the skin, you don't bruise the meat.
If you can do it with something that's dead, then, in my mind, it's not

(33:05):
impossible to go: potentially we could have a robot that automatically cleans up various types of birds down a conveyor belt.
I mean, I'm getting silly, but you can see how it's possible.
It's perfectly possible now: we could have a completely autonomous response today if the right existing technology was put in place.
But it's not viable.

(33:26):
I mean, it would be super expensive and fraught with lots of problems, and you'd get lots of mistakes.
Whether it's going to be in that state in 10 years' time, that's a different story.
It's potentially very much going to be in place.
I suppose, just a closing thought: one of our big drivers is to get responders out of harm's way.
We don't want responders in places where they could potentially get harmed, and one of the big high-risk areas is

(33:49):
putting people on boats and getting them to deploy very heavy, large pieces of equipment in very dynamic waters. So the driver there, getting responders off the water and having them control an artificial intelligence that can clean up the oil as effectively or better, is incredibly appealing to everyone.
Why would you not want to do that?
And we're seeing it with things like drones and autonomous

(34:11):
vehicles and autonomous shipping coming, and things like that.
So I think the chances are high that in the next 10 years we are going to start seeing that shift: what we think of as naturally just the skills that we need to teach our responders is actually not necessarily the skills that we're going to be teaching them.

Speaker 1 (34:28):
In 10 years' time, it's going to be how to control autonomous vehicles to do such tasks. Yeah, and then I guess the responder role becomes more of the incident management, the overall oversight, the control of the autonomous vehicles.
Yeah, it's interesting, isn't it? A whole different skill set, or developing skills that are already in existence, but really

(34:49):
taking them further. It's taking us away from the physical acts and going more towards the creative thinking.

Speaker 2 (34:56):
It's like we have a finite number of autonomous
vehicles that we can deploy toclean up the oil spill.
What's the best strategy overthe next five days?
And that's where you can seethe human interaction bouncing
off an ai or somethingunexpected happening and you
need to remotely pilot a vehicleor things like that.
Those kind of skills I can seegrowing over the next 10 years,

(35:18):
which, for existing responders, I can see being very threatening.
A lot of the skills and the talent and the experience are absolutely essential today.
How long are those skills going to be valued when actually the outcome that we want, a cleaner environment, can be delivered with far more autonomous solutions than is current today?

Speaker 1 (35:40):
So the responder of the future is a really interesting topic. So this is what Liam's tomorrow's world of emergency response looks like, then, is it?

Speaker 2 (35:49):
I see all this technology, and it's out there and some of it's used widely, but it's so expensive compared to the standard model that OSRL and response agencies use that I can't just go, let's buy five Spot robot dogs to replace 10 of the workforce, because that doesn't work.
But when do you start investing in this technology?
Drones is the big one for me at the moment: I'm trying to

(36:09):
push for more in-house capability with autonomous drones rather than relying on a third-party provider, and there's pros and cons to both, and the argument's worth having over and over again.
It's just knowing when to jump on, how to do it and when to take the risk.
And it's not that there is a right answer or a wrong answer, because we can't predict the technology space in three months, let alone in three years.

(36:30):
It's just knowing, when you make that decision, that there is a risk associated with it, and sometimes you're going to get that risk right, and sometimes you're going to be in too early or too late.
And the big one really is about learning the lessons that adopting new technology brings, not necessarily the new technology itself.
How can we be more effective at adopting new technology is

(36:54):
hugely important to an organization now: being agile, being able to get on quickly, not spending years debating a particular technology but just going, actually: try, fail, learn; try, fail, learn; try, fail, learn.
Okay, let's park this for now and come back to it in a year's time.
Let's try this and that, which is very hard for a typical
(37:15):
organization to get their head around when they're used to a far longer timeline, with big business cases and risk analysis and things like that.
Technology just doesn't give you that assurance anymore.
You can't adopt technology with the mindset that it is proven.
You can't.
It will be disproved, it will come out of date really quickly, and there's big wins to be had and there's big losses to be had,

(37:36):
and that's terrifying for decision makers trying to adopt it.

Speaker 1 (37:39):
Yeah, we've had the phrase fail fast in digital marketing for quite a number of years now, but it's less spend and less risky in most instances.
So, yeah, the technologies are hot, and all the things you talked about are a whole new level.
I mean, I think we've covered a lot.
Are there any other reflections you'd like to share?
Any further thoughts or advice you would give people or

(38:03):
organizations looking at AI and technology?

Speaker 2 (38:09):
There's always going to be a new shiny tool around the corner.
There's always going to be new technology.
There's always going to be ways in which you can do what you're doing better, whether it's entirely automated, or human-supervised, or whether actually we decide that it's entirely human, but you need to be conscious that you're making those decisions, making those distinctions.
The big challenge that everyone's got now is that what you don't know
(38:34):
is huge compared to what you do know, and you have to start getting comfortable with being uncomfortable about how little you know about a situation and the solutions that are coming.
You have to look less at a big technology adoption project which takes many years, and start breaking it down.

(38:56):
I mentioned Agile earlier on.
This is a framework that's been around a while in different industries but hasn't really reached ours.
We're still very much in a plan-to-deliver, that's-it type mentality, whereas actually iterative delivery reduces a lot of risk.
But you have to make that first step, and you have to accept, going into a project or a technology adoption, that it may

(39:21):
fail, and in fact the likelihood is that it will fail for at least the first five to ten iterations.
So adopting AI, adopting immersive tech like the Apple Vision Pro headset, adopting different surveillance technologies: all of these things have to stop being big projects

(39:45):
.
They have to start being just small, iterative things that we try and learn and deliver and move on.
I think we have to learn how to learn things faster, try things faster and evolve faster.
Business transformation shouldn't be a project that has an end date.
It should just be what you do on a day-to-day basis.
You are constantly evolving, and anything that's getting in

(40:09):
the way of that constant evolution, the need to sign off on business development proposals, business cases, process authorizations and things like that, is potentially hampering the effectiveness of your organization.

Speaker 3 (40:32):
Thank you for listening to the Response Force Multiplier from OSRL.
Please like and subscribe wherever you get your podcasts, and stay tuned for more episodes as we continue to explore key issues in emergency response and crisis management.
Next time on the Response Force Multiplier.

Speaker 4 (40:48):
No stage of development is bad, no operating system is bad, but it can be limiting.
If we cast our minds back to the very first smartphone, you look at what we could do on that device with the operating system it had, versus what we can do on our current devices with the operating systems they have now.
It's radically different.

(41:09):
We can do more, we can process more from a technology perspective.
So I think about our own inner operating systems as human beings as similar to that.
For more information, head to osrl.com.
We'll see you soon.