
June 15, 2023 44 mins

What if you could unlock the full potential of Generative AI and its impact on your life and company? Get ready for a fascinating fireside chat recorded live in front of an audience at the offices of leading international law firm RPC during London Tech Week.

The Actionable Futurist Andrew Grill was interviewed on stage by Helen Armstrong, a Partner in RPC’s IP and technology disputes team.

The discussion examined the risks, issues, and ethics surrounding this powerful technology and the roles played by giants like OpenAI, Google, and Facebook in this rapidly evolving space. 

This episode also covers the current applications and trends of generative AI in the retail and consumer sectors and how it's already making a mark on our daily lives. 

As we navigate the complex world of AI regulation, Andrew shared his insights on explainability, transparency, trust within AI systems, and the implications of the UK Government's white paper on AI. 

The episode also touched on the challenges of IP rights, GDPR, ongoing AI model training, and the importance of auditing systems to prevent bias.

Don't miss this thought-provoking conversation as we uncover the incredible potential of generative AI, its ability to unleash creativity, and the crucial need for ethical use of this game-changing technology.

We covered a lot of ground in this episode, including:

  • Generative AI and Its Impact
  • ChatGPT’s definition of a futurist
  • What is Generative AI?
  • Why AI is so popular now
  • The risks of using Generative AI
  • Why ChatGPT so confidently provides incorrect answers
  • How ChatGPT actually works
  • ChatGPT data sources
  • Is ChatGPT that useful?
  • The “magnet of mediocrity”
  • Where is Generative AI being used?
  • The “enthusiastic always-on intern”
  • The need for critical thinkers
  • The responsible use of AI
  • Challenges and Considerations for Generative AI
  • The AI black box problem
  • The challenges for regulation around AI
  • Can we trust AI?
  • Regulation areas for AI
  • Government response to AI regulation
  • Are you involving your risk department around AI?
  • Recruitment considerations for AI teams
  • The future of Generative AI
  • Enterprise AI Implementation
  • EnterpriseGPT challenges
  • Will AI provide us with more free time to be creative?
  • Actionable items for tomorrow
  • Your two tribes and the opportunity for a hackathon
  • Why AI comes at a cost
  • Is your data “AI ready”?
  • Will AI replace human creativity?
  • Adobe’s AI products
  • Accenture’s use of AI-generated imagery in a report
  • Generative AI will drive more creativity

Audience questions included:

  • Who is responsible for ensuring AI training data is valid?
  • Will AI disrupt or strengthen the economy?
  • The environmental impacts of Generative AI
  • The difference between human emotional intelligence and AI

Thanks for listening to Digitally Curious. You can buy the book that showcases these episodes at curious.click/order

Your Host is Actionable Futurist® Andrew Grill

For more on Andrew - what he speaks about and recent talks, please visit ActionableFuturist.com

Andrew's Social Channels
Andrew on LinkedIn
@AndrewGrill on Twitter
@Andrew.Grill on Instagram
Keynote speeches here


Episode Transcript

Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Speaker 1 (00:01):
Welcome to the Actionable Futurist podcast, a
show all about the near-term future, with practical and
actionable advice from a range of global experts to help you
stay ahead of the curve.
Every episode answers the question "What's the future of...?",
with voices and opinions that need to be heard.

(00:23):
Your host is international keynote speaker and Actionable
Futurist, Andrew Grill.

Speaker 3 (00:30):
My name's Helen Armstrong.
I'm a partner here in the IP and Tech team.
We couldn't let London Tech Week pass by without a session
on generative AI.
Being a litigator by background, I thought what better way to
do that than to cross-examine, I mean gently chat with, an expert
in the field.
So I'm very pleased to be joined by Andrew Grill.

(00:53):
He is a former IBM Managing Partner, a trusted technology
board member and, I'm pleased to say, he also hosts a top-rated
technology podcast, which we might even feature on
this chat.

Speaker 4 (01:12):
We're actually on the podcast right now.

Speaker 3 (01:15):
So, as an aside, I wondered what a futurist was,
because you are the Actionable Futurist on your podcast.
So I asked ChatGPT, and the answer that it gave me came with
the following warning: "It is important to note that futurists
do not possess magical powers to predict the future with
absolute certainty."

(01:36):
So there we go.
ChatGPT has already given a disclaimer. But joking aside,
Andrew has a huge amount of knowledge and experience in this
field, and we're really grateful to him for giving up his
time this afternoon to talk with me.
So I think we should probably get started, as I think we're
going to have a lot of questions at the end.

(01:57):
So we'll get cracking.
We'll start with the basics.
What is your definition of generative AI?
What is it and what is it not?

Speaker 4 (02:07):
Well, generative AI is not new.
It's actually been around since the 1960s, in very early
versions of chatbots.
Around 2014, the technology became better and its output could
start to resemble humans in image and in video form.
But in 2017, Google actually developed a thing called a
transformer.
So when you hear ChatGPT, hands up: who knows what the GPT

(02:27):
stands for?
It stands for Generative Pre-trained Transformer.
That's very techy speak, but basically it generates things,
it's been pre-trained with lots of data, and the transformer bit
is where it actually works that out. And what Google managed to
do was work out how to massively scale this, so you can
actually look at lots of data in real time and in parallel.
And of course, we know that then became the work that OpenAI did

(02:50):
with GPT versions 1, 2, 3, 3.5 and now 4.
But generative AI isn't all AI.
There are many other systems that use AI.
If you unlocked your phone today, you're using AI.
If you've had a Spotify recommendation, that's using AI.
If you're looking at a Netflix recommendation, not only is it
using AI, it's starting to use generative AI to show you a

(03:11):
picture that you will like, and it'll be different to Helen's
picture.
So you're actually seeing this in everyday life.
So it is transformative, but it's not new.

Speaker 3 (03:22):
Now, AI has been around for a long time, as you
say, but there's been a real explosion of interest.
It is everywhere, you know.
Journalists are talking about it, musicians are talking about
it, politicians are talking about it, activists are talking
about it.
What do you see as the trigger for this recent increase in

(03:43):
activity and interest in AI, and generative AI in particular?

Speaker 4 (03:47):
I know we've got Slido, but let's do some market research here.
Hands up if you have used ChatGPT.
If you haven't, I'm not sure why you're here.
Keep your hands up.
Keep your hands up if you use it on a daily basis.
So, a few people. That's interesting.
So I think it's fair to say that while AI has been around
for a while, ChatGPT has been the fire starter.
Who has heard of a comedian called John Oliver?

(04:10):
He's an English comedian in the US.
He recently did a montage of newscasters
basically saying "that sentence was written by ChatGPT".
So it's now in the news.
My parents live in Adelaide, Australia, and I spoke to them
at the weekend and they were talking about ChatGPT.
I said hang on, hang on, where did you hear about this?
It was on the news.
So finally it's in the news.

(04:32):
Everyone's talking about it.
But why are they talking aboutit?
Because finally non-technicalpeople have access to AI systems
.
We all know how to send aWhatsApp message, to use a chat
bot.
The friction has been removed.
Two or three years ago, if youwanted to play with an AI model,
you'd have to be a developer,you'd have to write a Python
script, all this stuff.
But what open AI has done ismade it incredibly easy and

(04:56):
remove the friction And noweveryone can try it.
And they have these modelswhere they show how long did it
take to get to 100 million usersand chat.
Gpt got there in two months, So100 million people plus have
played with, including most ofthese people in the room, And it
has now opened up thepossibility because you can play
with the AI models.

(05:17):
But importantly and we'll getonto this it's also exposed the
risks, the issues, the ethicsthat are now, thankfully, front
and center.
But I'm here to assure you,even though futurists get it
wrong, we will not all be killedby AI robots in two years.
That is not fact.

Speaker 3 (05:35):
As a lawyer, I like the fact you brought up
the word risk, and we certainly will move on to that later.
I think it's really interesting actually watching how many
people put their hands up.
I would be really interested to know how many people have used
ChatGPT in their personal life versus how many people have used
it in their professional life or are using it within their
business.

(05:55):
I don't know if we can have a show of hands for the latter. So
if you're actually using it within your business at the
moment... interesting.
So we've got a few people, but certainly not as many as are
using it in their everyday and personal life.

Speaker 4 (06:10):
But are you aware that if you use it in your
business environment and you put confidential information in
there, one, it's not GDPR compliant, and
secondly, it stays there?
There is a slider you can actually turn on and off to say
don't capture the chat,
but they still keep it for about 30 days for
training and monitoring purposes.
But Samsung got in trouble because some of their people

(06:31):
started to put code and board minutes into ChatGPT wanting to
summarise it, and guess what? That is now in the training
model.
So be very, very careful what you put into these models.
GDPR still applies.

Speaker 3 (06:45):
Yeah, and I guess, if we're talking about the risks
of using something like ChatGPT, I mean, I'm sure a lot of
people read, I think it was on BBC News, about the lawyers in
the US who actually used AI to prepare some court documents,
including referencing cases that were, in fact, entirely

(07:05):
fictitious, and when the judge asked them where they'd found
them, they had no answer but to say, oh, AI made them up.
I think it's called hallucination, essentially, and
it'd be really good, I know we're aware of it, but
can you just explain why AI hallucinates and why it so
confidently states things which

(07:26):
are actually completely false?

Speaker 4 (07:29):
Let's just go back to how ChatGPT works.
I'm going to explain this in one sentence.
All ChatGPT does is predict the next word in a
sentence.
So if I ask you who was the first person to land on the moon,
you would probably confidently say the next two words are
going to be Neil and then Armstrong.
So all that ChatGPT and these systems do is read billions

(07:49):
and billions of words and then look at patterns.
So actually, when ChatGPT is giving you an answer, it has
absolutely no idea what it's typing.
It has no idea of the meaning or whether it's right or wrong,
and it says it with confidence as well.
The hallucination is when it is actually trying to match
things up and it gets into a bit of a loop.
There was another story in the New York Times where a

(08:09):
journalist was chatting overnight with one of the models
and it started to say you should leave your wife, and I
love you.
And it kept saying I love you and it just wouldn't get out of
this loop.
It had gone into this loop and the only other answer that
it could find that matched the pattern was I still love
you.
So that's when it starts to hallucinate, and the challenge
is the data that goes in.
So, show of hands:

(08:31):
who knows what data was put into ChatGPT to train it?
We think there are five sources.
OpenAI have not been open about that.
We think the first one is the open internet, called
commoncrawl.org.
You can go there and download all of the internet for free.
So there's a lot of rubbish in there.
The second is Reddit links.
There's a website called Reddit, and if a link had more than
three upvotes,

(08:53):
it went into ChatGPT.
The next two sources are books: unpublished books and a
small collection of other books.
The fifth one was Wikipedia.
So when you look at those five sources, they're not all
completely qualified.
Now, in OpenAI's defence, what they did do is hire 40
people from Upwork to go through and do a series of tests. It's

(09:14):
called reinforcement learning.
They would ask it questions and the 40 people would then
say, is this a good answer or a bad answer?
Now, importantly, those 40 people were sourced from diverse
backgrounds.
You don't want people just like me answering questions about
ethical or political bias, because I will have my own
unconscious bias built into that.
So we'll get onto this, but part of the challenge is, if the

(09:35):
data is questionable and you haven't checked it with a
diverse range of humans, you're going to have this hallucination
problem.
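Andrew's one-sentence explanation, that these systems just predict the next word from patterns in text they have read, can be illustrated with a toy bigram model. This is purely a sketch: real models like GPT-4 use large neural networks over tokens, not word counts, and the tiny corpus here is invented for the example.

```python
from collections import Counter, defaultdict

# A tiny corpus standing in for the "billions and billions of words".
corpus = (
    "the first person to land on the moon was neil armstrong "
    "neil armstrong walked on the moon "
    "the moon landing made neil armstrong famous"
).split()

# Count how often each word follows each other word (a bigram model).
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict_next(word):
    """Return the statistically most likely next word. The model has no
    idea what the words mean, only which patterns it has seen before."""
    return following[word].most_common(1)[0][0]

print(predict_next("neil"))  # armstrong
```

Asked to continue "neil", the model confidently outputs "armstrong" because that pattern dominates its training text, exactly the behaviour described above, and exactly why unusual prompts can send it down a wrong but confident-sounding path.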

Speaker 3 (09:43):
So let's cut to the chase.
Is it really that useful if we can't rely on it?

Speaker 4 (09:50):
What OpenAI did is launch something to the market.
That was a good test; it wasn't quite ready.
Now Microsoft, although they're investors in OpenAI, and
certainly Google and Facebook, have had to scramble and go,
we can do that too. But Google and Facebook have more to lose.
In fact, when Google launched Google Bard, they had a video
that explained what it could do, and one of the questions was

(10:11):
about the first telescope to take pictures of a planet outside
our solar system.
And it got it wrong, and Google's share price dropped
$100 billion in a day.
So the risks to Google of getting it wrong are quite high.
So is it useful?
I think it's an incredible watershed moment, what OpenAI has
done. It allows people in the room to play with it and test it.
It means that we're going to see the edge cases: what is it

(10:34):
that works and what doesn't work?
Initially, I think you could actually ask it how to
create a bomb, and it would tell you.
Now it quite confidently says making a bomb probably isn't a
good idea, here's a number to call to get some help.
So is it useful?
I think there are lots of better things we could use AI tools on,
like curing cancer and climate change, those sorts of things,

(10:55):
but at the moment it's a bit of fun, and I'm sure you've all
played with those image generators to make different
images of yourself and your friends, and those sorts of
things.
What worries me, though, is we're going to see what one
writer called the magnet of mediocrity: everyone's
going to be using ChatGPT, and all the answers are going to be
fairly similar.
So how do you then rise above the noise?
In fact, I was talking before to one of my colleagues about an

(11:17):
example of a law firm that was actually looking for some interns,
and 80 out of 100 people who wrote in had used ChatGPT, and
all the answers were the same.
Apparently, they've been told never to apply there ever again.
So what worries me is you're going to have all these people
using ChatGPT to write awful content, flood the world with
awful content, and we're going to go,

(11:38):
why have we done this?
So I'd like you to play with it.
I'd like you to then think of things we could do with it to
actually improve humanity.

Speaker 3 (11:47):
So it's about understanding the limitations of
the technology and really ensuring that we increase
creativity, that we don't allow it to stifle it, because
obviously it's trained on data that's previously been produced.
It's not itself producing entirely new creations, as it

(12:07):
were.

Speaker 4 (12:08):
That's the whole point: it's generative, so it has
to have had something to work on.
So to know that Neil Armstrong is the right answer, it had to
have read that somewhere.
Famously, Getty Images has been suing Stable Diffusion, one of
the image generation companies, because Getty Images'
publicly available stock imagery has a big Getty watermark.
And guess what? When they generated their own version of
Abraham Lincoln using that, there was a stretched Getty

(12:30):
watermark.
And so Getty said, that's not cool, because you're using our
proprietary IP rights in generating something else.
So that's a problem as well.

Speaker 3 (12:42):
So I guess, following on from that, where are you
seeing this technology being used already, and what are the
trends that you're seeing out there?

Speaker 4 (12:49):
Some really cool examples.
If you have to create lots and lots of training videos, you'd
probably go and hire some of the guys at the back with the
camera.
You'll set up a green screen, you'll have a presenter read
things off the autocue, they'll get it wrong, it'll take half
a day.
You can now use synthesised video presenters.
In fact, in my talks I get introduced by a
synthesised lady who actually introduces me onto the stage, and

(13:10):
it's almost there.
There's a company called Synthesia.
It's a UK company.
You can upload a photograph and it will animate your eyes and
your lips so it'll look like you're talking.
It's almost there.
But there are some really interesting uses like that,
as in the Netflix example, where you've got to mass
produce different variations of image and creative.

(13:33):
That can be useful as well.
One of my clients sells DIY products.
Rather than going and photographing every single model
they have from every single angle, they could use generative
AI to replicate that and have synthetic photography.
The challenge there is, do I believe that what I'm seeing is
a real photograph, or has it been synthesised by AI? And that is

(13:53):
going to be an ethical issue as well.
Do we disclose that what you're seeing has been created by an
AI model?

Speaker 3 (14:00):
Interesting.
And so what are the opportunities, particularly in
the retail and consumer space?

Speaker 4 (14:08):
Well, part of it is inventory.
Again, if you've got to do lots of photoshoots, you can
start using it for that imagery.
We talked before about the metaverse: you can combine that
with virtual try-ons.
If you're in the retail space and want people to try out what
they might look like in certain clothing, that
can be used, and you can generate imagery for that as well.
SEO: if you have to do search engine optimisation, it can be

(14:29):
used to write great copy that's actually going to resonate
well, and you can test it multiple times.
Writing reports: Microsoft are now starting to launch a thing
called Copilot.
I saw it demonstrated at London Tech Week on Monday on stage.
You can start from a blank page and say, okay, I've got to
write this report. Why don't we feed it the meeting minutes
from last week and a product brochure, and it will then in
real time create

(14:51):
the first draft of that report for you.
Now, I liken ChatGPT to an enthusiastic, always-on intern.
He or she does a great job.
They're so enthusiastic, they're so confident, but you
wouldn't give that work raw to a client or a judge or publish it
on the internet.
So I think it's a great start.
A lot of this generative AI can be used as a great first

(15:13):
draft, as I say, for getting the ideas from a meeting onto
the page and then bringing them to life.
So part of it is actually playing with it:
playing with it with imagery, with video, with text, with
music.
You mentioned music before.
I went to an event a few weeks ago where the music had been
generated, the lyrics, the notes and the melody, by AI.

(15:35):
It was awful.
It just had no soul.
So that's the thing, the mediocrity about this.
You actually have to say it's a great first draft, it's good
for research, but I'm going to apply some critical thinking on
top of that to make it something I would want to give to a
client.

Speaker 3 (15:50):
So we're not all being done out of a job just yet.

Speaker 4 (15:53):
No, but what this will mean is, if everything is
the same...
In fact, here's a good test: if you want to see what average
looks like on the web, type something into ChatGPT.
What it spits out will be average.
So don't do that, and certainly if you're applying for a job,
don't do that, because it'll be the same.
I think what's going to be really important for students
coming through is the ability to be critical thinkers, to

(16:16):
evaluate it.
Your years of training as a lawyer, the experience you've gained
being on your feet in front of a magistrate or a judge,
knowing how to react, that can't be replicated with AI.
Where you have empathy, where you have feeling, where you have
to have critical thinking, I don't think that will ever be
replaced by AI.
It might get close to it, but what we're doing now, I

(16:36):
really think can't be replicated by technology.

Speaker 3 (16:39):
So it's the emotional intelligence behind it.

Speaker 4 (16:41):
Yes, and that's one thing that AI will never do.
All the experts I've spoken to on my podcast say that AI will
never love and it will never feel empathy.
So if you like those two things, you're okay.

Speaker 3 (16:53):
But it will repeat "I love you" over and over again, if
you program it to.

Speaker 4 (16:56):
It'll do anything you want.

Speaker 3 (17:00):
So we can't talk about generative AI without
looking at the responsible use of AI.
Can you just elaborate on that a bit more?

Speaker 4 (17:08):
Yeah.
So we're at a real point in time here where we have to think
about the impacts. If you look at what banks do, they
have model risk groups that develop models, test
them and then put them into production, because they
know that the regulator is going to sue the pants off them if they
get it wrong.
What we need to be worried about, though, are bad decisions
made by generative AI models.

(17:29):
Right now, in my home country of Australia, there's a Royal
Commission into Robodebt.
The government, back in 2015, decided they would run an AI
over welfare recipients: had they been overpaid, and if they
had, by how much? And then they sent the
debt collectors after them.
But what they did was use average income rather than the
income people actually received, on a very spiky basis, and so
the Royal

(17:50):
Commission is saying you actually made some very bad
decisions because you used AI.
Now, if I get a bad Spotify recommendation, that's okay. But
if I'm denied credit, chased for a debt or have to leave the
country, someone's got to be able to prove, and I'm sure across
the other side of the aisle you're going to ask, can you prove
what was the input to that model and what was the output?

(18:10):
The problem with AI systems is they are a black box.
We don't know exactly how they work.
So you're going to start to see legislation around
explainability: can you explain how it works?
If someone's income is this, why was that decision made?
You have to have a thing called observability:
when the model is running, can you watch to see if it starts to
hallucinate?
Is it actually going off the rails?
And the other thing is transparency: can you actually

(18:34):
explain and publish how this is going to happen?
So the challenge is going to be for the regulator, and I know
the next session is talking about regulation: do we regulate the
tech, or do we regulate the use of the tech?
And I think there's a fine balance between the two,
because if you can say, yes, this is why the decision was
made, there are existing laws that cover it, around
discrimination, all those sorts of things.
But because AI is this black box, we go, well, forget

(18:56):
about GDPR, we'll think about human rights law, and we'll just
see what AI does next.

Speaker 3 (19:01):
It's really interesting, because you
mentioned explainability and transparency there.
I think it all comes back down to trust.
We have to be able to trust the AI.
If consumers don't trust the AI, they won't use it, so they
won't engage, and I think that's what's so important, and that's

(19:22):
why it's obviously been brought up in the government's white
paper as one of the five principles that are set out there.
So, moving on, I think obviously we're going to talk
about regulation in some detail, but could you highlight some of
the issues that we face?

Speaker 4 (19:42):
We've already covered a few of them.
The Getty Images one was interesting:
the issue of IP and rights. Because the open internet
has been used to train ChatGPT, there will be information that
is public but is owned by someone else, and so how do we
actually manage that?
And even on GDPR, the right to be forgotten: if you ask
ChatGPT who Andrew Grill is, it gets some things wrong.

(20:03):
It gets them wrong in a nice way.
It says I've won some awards and written a book, which are
both not true yet, but maybe that's it being a futurist.
But basically, how do I then go to OpenAI and say that
information is wrong, retrain your model?
That's going to be difficult as well.
Then there's the issue around copyright and IP. At the moment
in the US, and you'll know more than I do, only a natural person
can be given IP rights.

(20:26):
So if you're generating this with AI, where's the fine line
between "I typed the prompt, so I own the rights to that thing"
and "it's been developed by tech, so you don't
own the rights"?
And that's going to come into play as we are mass producing,
mass generating these images and videos and music.
Who owns that? No one's got that right yet.

(20:46):
What's interesting, and maybe the last panel will cover this,
is that the UK, back in March, brought out a pro-innovation AI
white paper.
In the last few weeks, Rishi has said maybe we need some
guardrails, and what do you think, Sam?
So it is evolving.
I'm writing a book at the moment and I don't know where to start
and stop, because every minute something changes.
I started writing before ChatGPT existed; when I publish,

(21:08):
it will be ChatGPT 6 or 7.
The regulations will change, so how do I write a physical book
with that in there?
But these issues will not go away.
The challenge with every technology that's been released,
and I've seen it whether it be metaverse, Web3 or IoT, is that
the regulators can't keep up.
They don't know all the intricacies.
They need industry to help them bring it together.

(21:28):
Sam Altman, who is the CEO of OpenAI, has been doing a world
trip.
What I'm reading is that he's telling everyone that we might
be killed by robots in a few years, so regulate against that
and forget about all the other stuff that's happening near term.
But issues around rights management and those sorts of
things are paramount, and everything you're doing today
you should be running through your risk department.
Is your risk department even up to speed with the fact that

(21:50):
this tool exists in your organisation?
Because someone's going to get sued.

Speaker 3 (21:55):
Yeah, and I can certainly say that we are seeing
it already.
Clients are coming to us asking about the IP rights, in
particular who owns the output from a generative AI model.
There are also confidentiality issues, obviously, with what
goes in, and then down the line we're thinking about what the
disputes will be in relation to the ongoing training

(22:18):
of the model, because my background is in disputes
arising out of large IT project failures.
Now, often the testing in those projects is one-off testing
at the end of implementing the software: does it work or does it
not?
The problem with AI is that it keeps changing as it

(22:39):
trains, as more data is input, so it's not a question
of just deciding at the point of implementation, is it working?
But actually, in a year, is it working? In two years,
is it working?
How are we going to deal with that, both in the contracts but
also just in practice?
How are we going to audit the system continually to make sure

(23:02):
that those anomalies aren't coming up?
And not just the false output, but the bias that might
be there, the underlying bias: how are we going to stop
that?
So I think those are important things that we as
lawyers are all grappling with, at the same time as everyone in
the business. So I think you're spot on.

Speaker 4 (23:25):
Well, if you work in HR and you're recruiting for
roles in AI, AI technicians and model trainers, are you
recruiting people with a diverse enough background? These roles
become really important because they have the keys to the
kingdom, and when they've trained the model, as you say, and
set it off to work, have they trained it with bias built in?
Famously, years ago, the Google bros built an image

(23:46):
recognition system that was not able to recognise people of
colour, because they never thought to train it on people of
colour, because they were white males.
So there was an unconscious bias built in.
They went, oh, forgot about that.
I was having a discussion last night at a networking event:
you need the grey hairs and the iPhone babies to basically
be talking together and actually understanding both aspects,
because with one use of the technology there may be some

(24:07):
unintended consequences we haven't thought about, and
different generations will actually pick those up.
So the model training, and that's probably something not many
of you have thought about, is incredibly important. Lawyers
need to be across it, HR need to be across it, the board need
to be across it, because this could go haywire, especially if
you've got lots of people or money at risk where
decisions are being made using AI. And there's the data issue as

(24:31):
well.

Speaker 3 (24:31):
So you've got your data scraping and all of the issues
that come with that.
You know, the government in the white paper has mentioned that
you still have to comply with the GDPR.
You can't just say, because the AI model did it, I'm not
responsible for any breaches that result.
So I think everyone needs to be very conscious of the fact that,
while the legislation wasn't made with AI in mind,

(24:54):
they still need to comply with it at the end of the day.
So you may not have magical powers, but you are the
futurist here.
So what is the future of generative AI?

Speaker 4 (25:06):
So I think it has a bright future.
At the moment we've got the training wheels on: we're playing
with it, we're finding silly things to do with it.
But imagine your boss comes to you and says, Andrew or Helen,
what's the best performing product from this time last year?
What should be released for next season?
You go away and get the data, you might ask your intern to do
that, and probably hours or days later you come back with an
answer.

(25:26):
Imagine if you had ethically and legally trained a generative
AI model with all of your company's data.
You could ask it the same question and it comes back in
seconds.
So I think the power of generative AI is going to be
enterprise GPT, and it's happening already.
I saw a demonstration the other night where you can actually
load data into it and it starts making sense of it, and

(25:46):
that is a different type of training.
But it means it's behind your firewall, it's your data, it's
secure.
Imagine having everything, not just product brochures
but matters you'd worked on, everyone in the company's
information.
So who is the best person to talk to about AI here at RPC?
You could probably ask lots of people, but imagine a chatbot
coming back to say, here's the person, here are the sources for

(26:08):
why we think this is the person, and these are their contact
details.
So that is the future.
The challenge, though, is it's expensive.
A generative AI query is about ten times more expensive than a
search query, because of the computational power needed.
So right now, Microsoft is having to ration their AI
servers, saying you can't have as

(26:28):
many as you need because we don't have enough to go around.
So enterprise GPT is the future.
I think we're seeing it in pockets.
It's expensive, it's hard to train.

Speaker 3 (26:41):
When we get that right, I think everyone in this room will be typing queries in and delighting customers in seconds, not weeks.
And actually, if we think about it, if we use AI to do some of the jobs that are more mundane, shall we say, it frees us up to actually be much more creative, and to spend that time really thinking about things and getting into

(27:04):
understanding what else we could be doing, rather than just doing the same thing every day, which is very exciting.

You say that, and I have two views on that.

Speaker 4 (27:12):
Yes, I totally agree with you, and I want AI to basically fix the mundane parts of my life, when I have to renew things and buy things.
But when we got these phones, we were told they would actually give us some freedom.
Everyone has one with them, and you've probably checked yours multiple times.
So the challenge is, and one of the guests on my podcast talked about needing the freedom to think, I think famously of Steve Jobs, who said many years ago that if 30 years ago in Silicon

(27:34):
Valley we'd had all of these social media networks, nothing would have been developed.
We would all have been playing with things.
So we actually need to say, yes, this will free up some creative time, but can we use the AI to go, no, stop looking at that, don't do that, that's not an important task?
What Microsoft's Copilot will do in your email is basically say, let's summarize what you need to do.

(27:54):
You need to respond to this.
Here's something I prepared earlier.
So I think you're right.
It has the promise to free us up, but because human beings can be a little bit inquisitive and curious, we may go and do other things.
Even this morning, I got distracted by different things before I got here.
So it has the promise.
We just need to not have the attention deficit, and to have the freedom to think.

And then, finally, your podcast is called

(28:19):
The Actionable Futurist.

Speaker 3 (28:21):
So what actionable things should the audience, and all of us, be doing, say tomorrow, to make the most of this new technology?

Speaker 4 (28:30):
So keep playing with it.
If you've just had a dabble with it, keep playing with it.
I often ask my clients to use it for two purposes: type something work-related, and type something to do with your personal life.
I did a session a few weeks ago and I gave my clients some homework.
I said, before we do this next week, I want you to sign up for an account, because not all of them had one, or they'd be blocked by the firewall.
Just sign up.
I want you to type something in to do with your personal life.

(28:52):
And one of my delegates basically said, I asked it to write a poem about my dog, and she said it was actually really good: it explained the nuances and how fluffy the dog was, or whatever it was.
And I said, how long would that have taken you to actually do?
Probably most of a day.
And it did it just like that.
The reason I did that is that you then lean forward and go, ah, that's how it could be used in my personal life, and you become what I call more

(29:14):
digitally curious, and you then say, well, let's actually look at what imagery can do.
Can we do something with music?
Could we do something with our product department?
So by playing with it and becoming more digitally curious, you then want to explore and understand more of what it can do, and understand how roles will change.
I had one podcast guest say that the role of the developer will probably morph into more of a product manager, because the

(29:35):
code will be written for them, so they are then managing their creation and becoming more of a feature person, rather than worrying about how the code works.
And the other thing is, in every organization you actually have the answers there already.
You have two tribes.
You have the going digital people, like us, that have been there for a while, and you have the born digital.

(29:56):
Their first toy was an iPhone.
They live and breathe this stuff, and what I often ask people to do is hold a hackathon.
You all know what a hackathon is.
You get in a room like this and you look at three or five key business problems, and what happens is the born digital and the going digital both have a point of view, which is very, very different, and you can probably solve these things in an afternoon.
So embrace the people you have in your organization, because

(30:20):
the dirty little secret is that AI is not intelligent on its own.
It relies on humans to train it, to set it to work, and that creative thinking is so important.
So I want you to embrace and use all the people you have in your organization to come up with other ideas, because the answer isn't always we'll use AI, just as the answer hasn't always been blockchain or the metaverse.

(30:40):
It might just be good old getting around the table and solving the problem.

Speaker 3 (30:45):
So it's a bit like Jason was saying in the first session.
It's really thinking about, where can we use AI, where would it be helpful?
It's not just a case of going, oh, we want to use AI because everyone's using AI, and I've heard "AI washing" being thrown out there.
So it's really thinking about where we can usefully use AI,

(31:07):
and then where do we not want to use AI?

Speaker 4 (31:10):
Because it comes at a cost: a cost of training it, setting it to work, buying it, the risk issues, all those sorts of things.
It is not free.
You have to pay for ChatGPT to get more features, and eventually there won't be a free model.
You have to pay for more of it.
So look at the cost benefit.
Is there a value exchange there?
But also look at what you can digitize already.
Are there processes that you've been dying to put into a

(31:31):
digital format?
Can you do that?
And I know we're out of time, but think about the data you have.
The challenge with these AI models is that you have to train them with data that makes sense.
So think about the data you have, the data you need and the data you'd like.
And where might you get that from?
Because you may not have data that's actually AI-ready.
So it could be a moot point that we want to do AI, but our

(31:52):
data is all in spreadsheets.
There's no way we could train an AI system on spreadsheets.
So it comes at a cost, but it has huge benefits.

Speaker 3 (32:00):
Well, I'm sure we've got questions out there in the
audience.

Speaker 5 (32:04):
Thank you very much.
Very exciting topic, veryexciting conversation.
I would like to, if I gotAndrew right on the creativity
topic, to come back on this bit.
If I got your point right, youthink that not you only, but
many people think that AI willnot replace, like humans,
creativity And at the same timewe were saying that we do not

(32:26):
know what exactly happens insidebox.
Yeah, it's black box, and whenwe speak about humans,
creativity, the main challenge,as I know at least, is that it's
also kind of black box, yeah,like scientists working to
figure it out.
So could you maybe explore abit more on that Why we are so

(32:46):
sure that AI is not creative andit's not replacing humans to
certain extent and cannot belike creating itself?

Speaker 4 (32:55):
Lawyers love precedents.
Let's go back to some precedents.
When the phone came out, we thought it would make our lives easier; we're now doing more and more things with it.
The same when the lift came out.
There are lots of precedents where we thought these jobs would be wiped out, but they haven't been.
Let's look at creativity and doing imagery with generative AI.
Now, I'm not a graphic artist.
I use things like Photoshop.

(33:16):
Adobe have now started to put into their Photoshop product a thing called Generative Fill.
You can literally circle an area.
There was an example they had of a picture of a woman on a bike in the desert: they lassoed an area and said, we would like red, we would like yellow road lines put in there, and it basically went off, generated the road lines and inserted them into the image straight away.

(33:36):
You still need some creativity to actually come up with the idea for that, and if you use some of these tools like Midjourney or Stable Diffusion, you have to type in a prompt.
There's a really interesting use of this from a company called Accenture.
Everyone knows Accenture.
They did a report on digital trends, and littered through the report they used imagery that actually corresponded to parts of it.

(33:58):
What they did, though, is they said, here's the text that generated that image, and here are the additional prompts we needed.
Now, their prompts included things I'd never heard of, all these parameters that only a graphic artist would know about.
So I think part of it is we have to have the idea in the first place.
Generative AI generates from something that's already there, so the creative spark has to be there.

(34:18):
It can make it a lot easier.
Take the podcast we're on right now: in the olden days, I would get the audio tape and literally splice it together and do all those sorts of things.
I don't have to do that.
I use Adobe Audition.
All the breath noises, all the mistakes that we made will all be gone using AI and those sorts of tools, so I still have to be creative and look at how I want my audience to experience it.

(34:39):
So I don't think creativity is going to be stifled.
I think it'll be enhanced.
Which graphic designer these days draws with pen and paper for a campaign?
They use a tool.
And your other point, about creative thinking: I think it will actually open up.
Some of the examples I've seen have been mind-blowing in actually

(35:01):
visualizing how things might look that we've not even thought about.
So I think generative AI will actually spawn more creativity.
Everything I've seen, and my personal view, is that it's not going to stifle creativity.
It's another tool to make humans even smarter.
Where we get to worries me, because at some point, with the neurons in here, we can't be any smarter than we are, so at some

(35:21):
point we will saturate with all these tools around us.

Speaker 2 (35:24):
In your opinion, Andrew, if AI has the potential of being more efficient or, in quotation marks, better than humans, who is responsible, in your opinion, for ensuring the data that we key in to train AI is, in quotation marks, correct?
For example, a fork is to be used to eat food with and not to

(35:48):
stab someone in the eye.

Speaker 4 (35:51):
This is a really interesting question.
It's almost, what is the truth?
One of my podcast guests, a lady called Stephanie Antonian, who worked for DeepMind and a bunch of other AI startups as an ethicist, posed this question.
I put it on LinkedIn, and I got shot down, but let me try it.
What about if we had an open source version of the truth?
So the fork analogy is: it's law that the fork cannot be used to stab someone, because that's called a weapon, but it is an

(36:13):
agreed truth that the fork is used as an implement.
So is that on an open and fully accessible database that has to be checked?
So when someone does a query, it's washed against that; it then has to look at the open database to say, is this correct?
I put this on LinkedIn and I got shot down, because who then maintains the database, and what goes on, what goes off,

(36:33):
what is the truth, what is not truth?
So I think there are ways to look at that.
Technically we could do that, but then you have people going, but that's not right.
So if I asked who won the presidential election last time in America, there would probably be quite a difference of opinion in this room.
So what is fact, what is agreed, and what do we wash that against?
That's something that can be solved with technology, but

(36:55):
dear old humans get in the way, and we have a point of view and we have a conscious bias.
So that's the answer, I think; whether it will work is yet to be tested.

Speaker 6 (37:04):
What are the big policy concerns in tech?
There's been obviously concernsaround big tech And I was
wondering how you saw AI.
On the one hand, you could seeit as a source of disruption,
but on the other hand, youmentioned how some of the
technology has been developed byGoogle, microsoft's investor,
and so on.
So is it too soon to say, or doyou have a sense of whether
it's going to disrupt the incomeof the saw all strengthening?

Speaker 4 (37:25):
The problem is, and we've alluded to this before, that there's a cost to doing AI.
There's a generative cost, and it's 10 times more expensive to run generative AI than normal search queries, which is why Google haven't rushed into doing exactly what OpenAI have done: they've got a search business to protect.
Now you'll start to see generative queries enhance the search results, but someone's got to pay for that.
Sam Altman actually said, can you please stop asking it

(37:45):
queries, because we're blowing out our budget with Microsoft.
Originally Microsoft were paying; they were giving Azure credits for free.
Now they've invested $13 billion.
It costs a truckload of money to run these services, so that is going to be the domain of those companies that can afford it.
The very early models, GPT-1 and 2, could be run on a desktop, just as you could mine Bitcoin years ago on a desktop.

(38:06):
Now you need lots of computational power, so it will, I think, select for the big players that have that.
And then the question is, if we need access to these tools, is it equitable?
The same question came up back in the day with Bitcoin.
We don't talk about Bitcoin anymore; no one's talking about it today at all.
Bitcoin is incredibly energy inefficient because of the way that it mines and checks the chain.

(38:28):
Generative AI is also incredibly inefficient, so at some point someone's going to go, we're killing the planet again, and even faster, by asking questions of generative AI.
So I think it will initially be the domain of the companies that can afford the computational power.
As the smart boffins work out how to make it more and more efficient, that price will come down.
But I think for the foreseeable future it'll be big tech that

(38:52):
run that, and that's a danger, because big tech is then the gatekeeper as to how these models get trained.
So the faster we can get enterprise GPT out there and start using it for effect inside organizations, the sooner we can leave the silly queries to people on OpenAI and ChatGPT.

Speaker 7 (39:07):
You've said multiple times that AI is unable to be creative and have emotional intelligence and things like that.
But what is the true difference between having emotional intelligence and creativity, and replicating emotional intelligence and creativity?
Because we already know that ChatGPT has been able to find creative solutions for problems, especially earlier this year,

(39:29):
when ChatGPT was able to hire someone to pass a CAPTCHA test for it, so clearly it's able to replicate some degree of creative thinking.
So is there a difference between replicating creativity and actual creativity?
Because clearly it can replicate it, and probably will have the ability to replicate it far better as the technology

(39:51):
advances.

Speaker 4 (39:52):
That's a great question, but if I go back to the example you gave, I don't think beating CAPTCHAs is creative.
I think it's sneaky.
It just learned how to do that, because it learned all those traffic-light challenges.
I hate those, by the way.
I really, really hate them.
If you go onto my website, there is a CAPTCHA, but it's hidden, and it does different challenges rather than making you do traffic lights.
I think you have a valid point, though, but I still think, and I

(40:13):
hope, because I'm a human being, that we have a streak of creativity and we will always be ahead of the AI, because we have to tell it what to do.
A good example is Nick Cave, the singer.
Someone used ChatGPT to write a song in the style of Nick Cave and sent it in to Cave, and he was appalled.
He said, there's no emotion, there's no feeling in this song; it's not the way I would have written it.

(40:34):
And he got really angry, because he said creative people have this spark that sets them apart.
So I'm happy that, I think, humans will still be the overlords.
I think it'll get close.
When you first used ChatGPT and it looked like it was a human typing something back to you, you thought, that's pretty smart.
So it can fool you most of the

(40:55):
time, but even with these deep fakes we're seeing, we can often still tell.
It's called the uncanny valley problem.
The tech is so good that we have to ask, how can we tell fake and real apart?
If we have a thing that is professing creativity and empathy and love and emotional intelligence, is it really me, or is it an AI version of me?
I think that's the scary thing to look out for.

Speaker 8 (41:20):
When I first saw ChatGPT-4, it was so exciting.
I tried to use it to do some stuff.
It was really good at the basic things, and I think there was this overhype that it was going to replace humans very quickly, but it became evident.

Speaker 1 (41:32):
That's not the case.

Speaker 8 (41:33):
So I think one of the cases you've made is that humans will always be the overlords, but you're judging it based on the current iteration of the tech.
As things get more advanced, isn't it possible that the AI does become creative, does become able to formulate the problem and the solution?
Is that something that you've thought about?

Speaker 4 (41:50):
Yeah, it's going to get close to great creativity.
If you had a 12-year-old on stage, their experience of life is going to be very, very different to ours.
So they can be creative, but you would go, well, that's a child's version of creativity, and I think that's where it's at the moment.
I agree it'll get better and better, and it will start using what's called multimodal input.
With GPT-4, now you can actually show it a picture, and there's a great one.
It was basically a VGA connector that plugged into a

(42:14):
phone, looking like a Lightning connector, and they asked it, why is this funny?
And it worked it out, because it was ironic.
So it's getting closer.
I think it's going to get quite close.
And then the challenge, as I said to the other question, is how can you tell the two apart?
So you're right, at the moment it's a great first draft, or with GPT-4, a great fourth draft.
It will get better and better, because as humans we're going to

(42:34):
go, oh, I wish I could do that, and we'll start to train it that way.
But while I think it'll get close, I don't think it will be as good as the best person in their field can be: someone who is an amazing public speaker, or a major surgeon, or an amazing orator.
They will be the standout person, because they just blow the room away versus someone who's OK.
So I think we'll still have people that can be that level

(42:57):
above.
But you're right, GPT-6 or 7 will get closer and closer to perfection.
I hope it doesn't get there, though, because I like being in a room full of people.

Speaker 3 (43:07):
I think we have to end on that note, because we have run out of time.
Thanks very much, everyone.

Speaker 1 (43:12):
Thank you for listening to The Actionable Futurist podcast.
You can find all of our previous shows at actionablefuturist.com, and if you like what you've heard on the show, please consider subscribing via your favorite podcast app so you never miss an episode.
You can find out more about Andrew and how he helps

(43:32):
corporates navigate a disruptive digital world with keynote speeches and C-suite workshops, delivered in person or virtually, at actionablefuturist.com.
Until next time, this has been The Actionable Futurist podcast.