
January 3, 2025 • 41 mins


Join us as we explore this compelling question with Rachel Free, an esteemed patent attorney and the Vice Chair of the Computer Tech Committee at CIPA. Rachel takes us on a fascinating journey from her initial studies in chemistry to her foray into artificial intelligence, sharing intriguing insights into human-AI collaborations and the ethical dilemmas they pose. Along the way, we engage in some playful banter about verbal tics, adding a light-hearted touch to our deep dive into the crossroads of AI and intellectual property.


Episode Transcript

Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Speaker 1 (00:03):
Lee Davis and Gwilym Roberts are the Two.

Speaker 2 (00:05):
IPs in a Pod, and you are listening to a podcast on intellectual property brought to you by the Chartered Institute of Patent Attorneys.
Good afternoon, Sir Um-a-lot, Sir Um-a-lot, Sir

Speaker 3 (00:26):
Um-a-lot.
Are you picking up on something that I may have said in our little friendly chat before the podcast starts? It's embarrassed me in front of the listeners.

Speaker 2 (00:35):
I've never noticed how much you um. I like the name Um-a-lot, and I think we should be counting. So, Opinio, I know you're listening in. Can you count how many times Lee says um or er during this podcast, please? Because it might be really distracting for the listener.

Speaker 3 (00:51):
I think I've got a lot better at it.
By that I don't mean I do it more; I think I do it less, because when you become a podcaster, I think you listen to yourself more than you ever do normally, and you become conscious of these little mannerisms in your speech.
So what I'm doing at the moment, Gwilym, is a little tactic

(01:12):
where I stop and, rather than filling that stop with uh or um, I'm no longer embarrassed by the fact that I stop. I'll just take a pause and then I'll speak again.
It sounds ridiculous, don't you think? Please, please, go back to umming and ah-ing.

Speaker 2 (01:26):
I was in Japan once.
I love trying to pick up language and I always fail.
But I noticed that they kept saying "ano" all the time, and I thought, oh, that must be a really important word in Japanese. That's, like you know, an absolute key word that drives conversation forward.
So eventually I said, what does "ano" mean? And they said, it's Japanese for um, which made me laugh. Um.

(01:46):
Second point: what do I do? What's my irritating verbal tic?

Speaker 3 (01:52):
Oh, you do tend to turn your head frequently. Which is fine when you've got your little kind of in-ear mic thing in, because it turns with you. But sometimes you don't turn away and you keep talking; you've moved away from the microphone, you move back, and your volume goes up and down. But it doesn't annoy me enough for me to have ever told you you did.

Speaker 2 (02:09):
Then I started this.
I'm sorry.

Speaker 3 (02:12):
I'm sorry. How's your afternoon gone? Second podcast recording of the day.
Yeah, we've had some banter though. So shall we crack on? Shall we get our guest on?
Yes, so we have Rachel with us. Rachel Free. Welcome to the podcast, Rachel.

Speaker 4 (02:33):
Oh, thank you very much, Lee.
It's very kind of you to invite me.

Speaker 3 (02:36):
Oh no, it's a pleasure to have you on. Please don't feel nervous or anything like that. Gwilym and I have done so many of these now that it's second nature to us, and we don't um and ah and things like that.

Speaker 4 (02:45):
So it's great to have you on. Do I get 50p for every um?

Speaker 2 (02:49):
Yes, yes, you do. From Lee.

Speaker 4 (02:53):
Let's try.

Speaker 3 (02:56):
Oh no, no, 50p, and I was doing so well. Let's crack on.
So, for the listeners, who are you, Rachel? What do you do?

Speaker 4 (03:10):
Well, I'm a patent attorney. I'm a CIPA member as well. I'm Vice Chair of the Computer Tech Committee. My background is AI. I studied AI at university, and I really enjoy patents. It's so wonderful. I'm so glad I became a patent attorney.
So I'm a partner in a law firm called CMS, which is an

(03:34):
international law firm, and it's quite unusual to have patent attorneys in a law firm. But I, there you are, I said an um, so do I owe you?

Speaker 3 (03:44):
Let's not get fixated on it. Tell me a little bit about how your interest in AI came about. Was that always what you were going to study, or did it come about sort of further down the track?

Speaker 4 (03:56):
I actually started off doing chemistry and then I switched subjects. I was able to choose my finals papers, and one of them was AI. I absolutely loved it, and that led me to go on and study AI for a master's and a DPhil.

Speaker 3 (04:16):
One of my favourite things about AI at that time was representations: how to represent knowledge, and how you can do different things according to the representation that you use.
And at that time... so AI has, variously across the

(04:37):
years, been something that's going to revolutionize the human race, something that's really scary and that we ought to be concerned about, something that needs regulating and control, something that we just need to allow to be freeform and organic. That's a lot that I've just chucked in already, isn't it? Where are you on the great "AI is something to be feared or celebrated" continuum?

Speaker 4 (04:54):
I'm both. So I think it's very powerful technology and it's really going to improve our lives. There's lots of things we can do with machine learning technologies.

Speaker 2 (05:07):
Can I ask what your DPhil was about? What's the kind of specific research topic?

Speaker 4 (05:13):
Well, do you wear contact lenses?
Because?

Speaker 2 (05:17):
It involves contact lenses.
Go on, go on, that sounds amazing.

Speaker 4 (05:21):
So I was interested in representations.
At the time there was this researcher called David Marr, and he had a book about the visual system, and the idea was having different levels of representation, starting from the light that falls on the retina and then building up

(05:42):
different levels of representation, up to a 3D representation of the world as we perceive it, and having different neurons in the brain for those different levels.
So my research involved vergence eye movements and stereopsis. Vergence eye movements are where you move your eyes towards

(06:03):
each other and away from each other.

Speaker 2 (06:06):
Yeah.

Speaker 4 (06:08):
That enables you to change the depth of the focal point, and helps you to be able to see in three dimensions by fusing the images from your two eyes, which are slightly displaced from each other because our eyes are next to each other in our head with a space between them. That ability is called stereopsis.

(06:30):
So I had scleral contact lenses, which are like a polo mint shape, and the contact lens goes on the white part of your eye. Inside it was embedded a small coil of wire, and you sit inside a magnetic field with your head on a bite bar, so that your head is still as your eyes move.

(06:51):
The movement generates a small voltage in the coil of wire, a small current, which can then be recorded and magnified in order to measure the eye movements.
And then, whilst I was in this magnetic field with these contact lenses on, I was looking at random dot stereograms that I'd made, testing the relationship between those eye

(07:14):
movements and the ability to see in three dimensions.
And interestingly, we found some differences according to the small elements that those random dot stereograms were made of, whether they were blobs or edges or lines, and that sort of tallied up with this David Marr idea of having different levels of representation actually

(07:36):
hardwired into the brain somehow.
It's interesting.

Speaker 3 (07:41):
It's really interesting, and you don't know this, but before you came on, Gwilym and I were talking about colour and perceptions of colour, and how you don't necessarily see colour in the way that it's presented in the real world. Your brain will do things to help you understand colour, and it might look different to different people.

Speaker 4 (08:00):
Yeah, and there's many people who are colourblind, or people who have more colour receptors in their retinas. So, like, pigeons can see more colours than humans, and there are some rare humans who have that ability as well. It's incredible.

Speaker 2 (08:18):
This is getting interesting, isn't it? Not what I was expecting at all.
Funnily, I've heard of stereopsis, because I've heard of chromostereopsis, which is that weird effect where 2D different colours on the screen look 3D. Like, the Netflix logo looks three-dimensional to me because it's black and red, and your eyes do weird things with those colours. But, um, Rachel, I love

(08:40):
your DPhil.
So presumably this is all to do with machine vision and trying to work out... You know, you had a training set, which is the dots, and you had your activities and everything, and you were using that presumably to kind of teach machines going forward, or give them some data about how they might then do machine vision.

Speaker 4 (08:57):
Well, it hadn't encountered the breakthroughs that came with ImageNet, and so, although machine learning techniques were available, I didn't use them in my DPhil.
I was more looking at the representations and how the eye

(09:20):
movements and the human visual system were working.
But it really does lead to an amazing thing today, where we have these embedding spaces, and the representation in the embedding space is just so cool, because we can control that using the neural network.
So it's an ability to create a representation that we have

(09:43):
control over how it's created, using language models or different types of neural network.
And then, once you've got those representations, you can do things in the embedding space, doing work inside there in an

(10:08):
efficient way, because those embeddings are very compact representations, so it enables you to do computations efficiently and join together modalities.
That's so exciting.
Being able to join modalities was something that I was interested in when I was studying my DPhil.
So being able to link together the modalities of touch and the

(10:32):
different senses that we have, sight, smell: how are those linked together? It's the embedding space, which is now enabling language models and image neural networks to join together and talk to each other

(10:53):
in the language of the embedding space.
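A rough sketch of what working "in the language of the embedding space" can mean: once different modalities are mapped into the same vector space, cross-modal comparison is just vector similarity. The tiny hand-made embeddings and labels below are invented purely for illustration; a real system would produce them with trained neural network encoders.

```python
import math

def cosine(a, b):
    # Cosine similarity: how closely two embedding vectors point in the
    # same direction, regardless of their length.
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# Hypothetical toy embeddings: in a real system, a text encoder and an
# image encoder would map their inputs into this shared space.
text_embedding = {"cat": [0.9, 0.1, 0.0], "car": [0.0, 0.2, 0.9]}
image_embedding = {"photo_of_cat": [0.8, 0.2, 0.1], "photo_of_car": [0.1, 0.1, 0.95]}

# Cross-modal retrieval: which image embedding sits nearest the word "cat"?
best_match = max(image_embedding,
                 key=lambda name: cosine(text_embedding["cat"], image_embedding[name]))
```

Here `best_match` comes out as the cat photo, because its vector points the same way as the text vector for "cat": the two modalities "talk" through nothing more than geometry in the shared space.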

Speaker 2 (10:55):
You've got a bunch of inputs coming in. We should get to IP at some point. You've got a bunch of inputs coming into this embedding space, and the brain basically has tons of context and hardwired stuff that it does to link those together and kind of derive a much more coherent picture from it. Is that roughly what you're talking about

(11:15):
here?

Speaker 4 (11:16):
Yes, you're filling in the gaps: being able to join those modalities and then apply attention. So being able to attend, using the transformer architectures, really has advanced the functionality.
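The attention step mentioned above can be sketched in miniature: each query is scored against every key, the scores are normalised with a softmax, and the output is the corresponding weighted mix of the values. This is a bare toy of scaled dot-product attention over plain lists, not any particular library's implementation.

```python
import math

def softmax(xs):
    # Numerically stable softmax: exponentiate and normalise to weights.
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

def attention(queries, keys, values):
    # Scaled dot-product attention: each query attends over all keys and
    # returns a weighted mix of the value vectors.
    d = len(keys[0])
    outputs = []
    for q in queries:
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d) for k in keys]
        weights = softmax(scores)
        outputs.append([sum(w * v[j] for w, v in zip(weights, values))
                        for j in range(len(values[0]))])
    return outputs

# One query that strongly matches the first key, so the output is
# dominated by the first value vector.
mixed = attention([[10.0, 0.0]], [[10.0, 0.0], [0.0, 10.0]], [[1.0, 0.0], [0.0, 1.0]])
```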

Speaker 2 (11:36):
Back to IP, Lee.
Sorry.

Speaker 3 (11:38):
You carry on, Gwilym.
You're in the driving seat at the moment.

Speaker 2 (11:43):
Okay.
Well, let's move forward to where we're looking at AI in our space. You mentioned that you were listening to the Ryan Abbott podcast that we did recently, where of course he's been very closely involved in one particular area really, which is the slightly boring legal question of whether AI can be an inventor. But there's also the underlying, rather more

(12:05):
interesting idea: can AI be an inventive entity? I just wonder what your thoughts were on that whole DABUS case and AI, and where that went.

Speaker 4 (12:17):
It's so wonderful. I loved Ryan Abbott and the podcast that you did, so fantastic.
And isn't it wonderful that we've had all the DABUS case law? Because it's put society in a place where we've had the conversation. And we've had the conversation about can AI be an inventor, and it's leading on now to this question of AI-assisted

(12:39):
inventions, where a human creates an invention jointly with an AI tool or using an AI tool, and it seemed from the podcast that that's a new question that Ryan's bringing to the courts for them to debate.
So I'm really, really keen to hear about that, because my

(12:59):
personal opinion is that we should be able to have patents, no matter what, because we want to facilitate knowledge sharing. That's really important in my view. Patents is one way to achieve that. Another way would be regulation, but that seems very heavy-handed. Having more than one way to enable knowledge sharing would

(13:20):
be good.

Speaker 2 (13:21):
Yeah, I think patents are a nice compromise, aren't they? They rely on the human innovative, creative urge, but they pull away the urge to keep everything a secret as well. I agree, I think it's a good thing. I think Ryan went on a bit about regulation actually, which is always interesting, to step that far back into the kind of policy level.

(13:42):
And do you think AI can invent? Because we hear different stories from different people on this.

Speaker 4 (13:47):
Yes, I think it can.
I think we found that when AlphaGo played against the world's best Go player and made, was it, move 37 or whatever. That was an invention, in my opinion.

Speaker 2 (14:03):
I think it's excluded, because it's a method or scheme for playing a game. I'm sure it's excluded. But yeah, that's more a regulatory problem.

Speaker 4 (14:09):
Yeah, yeah, yeah, there's still an invention, even if it's excluded. Some inventions are excluded.
Fair point, fair point, fair point.

Speaker 2 (14:18):
Yeah, and yeah, the AI was the actual deviser. True, true.
The other big case, obviously, that's knocking around in the UK at the moment is Emotional Perception, which I think has just had leave to go to the Supreme Court, as I saw. Yeah, and again, Rachel, you know we love an expert opinion. Do you want to kind of very quickly summarise that, and what

(14:40):
your thoughts are on it?

Speaker 4 (14:41):
So the Emotional Perception case is a neural network case, and as far as I know it's a recommender system. In general, at the European Patent Office it's very difficult to get granted patents for recommender systems, and so the UK IPO in the first instance had refused the application. And then at the next instance the High Court

(15:08):
came to the opposite conclusion, and decided that, because there was an artificial neural network involved in the claim, this made it the stuff of patents, and a patent should potentially be allowed, as long as the other criteria for novelty and inventive step were met.

(15:30):
Then it went to the Court of Appeal, and they decided it was back to the same situation as at the IPO, but with a slight difference, saying that even a hardware neural network would not be patentable, which goes further than the status quo

(15:52):
in the original situation.
And now this is going to be appealed to the Supreme Court, which is wonderful news, because we will get legal certainty, and it gives an opportunity for the Supreme Court to give their opinion not only about this particular situation of the artificial neural network for the

(16:13):
recommender system, but also more broadly for computer-implemented inventions, and perhaps also for the way inventive step is assessed in the UK and the fact that that's different from the problem-solution approach.

Speaker 2 (16:29):
Gosh, that'd be interesting if it went that far. The last one I'm aware of was DABUS itself, I think, certainly on AI, and that was not quite as glamorous an outcome as I think people were hoping for.

Speaker 4 (16:42):
Yeah, but it's going to be very exciting, isn't it?
Perhaps in 2026 we might have the decision.

Speaker 2 (16:50):
Gosh, if AI were running it, it'd be a lot quicker, that's for sure.
I was just going to ask, Rachel, where you see AI kind of impacting on your practice in terms of clients, not AI as a tool, in terms of kind of AI-related work. Are you seeing a lot coming in? Where are the hotspots for clients?

Speaker 4 (17:10):
So there's lots of work happening in machine learning and AI and agentic AI. Especially at the moment, agentic AI is really a hot topic. This is where you have lots of individual AI models that are able to work together and achieve something with synergy

(17:31):
that's better than the sum of the individual parts.
So that's a very powerful idea, and I was at a lecture recently given by an AI professor, Michael Zhu, and he was saying that if we have agentic AI systems, they can form an AI

(17:52):
society, and that this might lead to traits of that society which are something akin to consciousness. It seems a very interesting idea to me.
He was saying that consciousness we could see as

(18:12):
perhaps the ability to have memory of attention, as well as attention. That was something I hadn't really thought of before.
Certainly, having AI agents is going to be something that's going to improve all our lives, because we'll have agents that can do things for us. Like, we might have one that does the grocery shopping, and it

(18:35):
would be able to find out, you know, do meal plans, and it would be able to act with some autonomy. And you might have another AI agent that was maybe your fitness coach, or one for coaching you at work, or one to help you with financial

(19:01):
management. But these are just some ideas; it's difficult to see into the future.
My colleague said to me, oh Rachel, what could we do? I want to try and make a benchmark for how I can compare AI progress in different jurisdictions, in terms of AI

(19:24):
regulation as well as the technology. Well, I thought about that for quite a while, and really I was thinking, actually it's what the AI can do for you, what it's permitted to do.
So, for example, if you have an AI agent, is it

(19:47):
allowed to do the shopping for you on its own without your say-so? What things is it allowed to do in that jurisdiction? Could it be a citizen? There's a sort of a scale of things, and perhaps that would be the way to do the benchmarking across jurisdictions.
But it's definitely an interesting thing to

(20:08):
think of consciousness as being a trait of an agentic AI system.
It made me think. Actually, do you think, Gwilym, we could have a patent for a method of giving a machine consciousness?

Speaker 2 (20:17):
In terms of, would that be something that the patent office would accept, if you substantiated how to do it? I think so. I don't see why not.
I always wonder... you start wondering about what you use the patent system for at that point, though. You think, if you did get that one, you probably wouldn't open it to the public domain. Sadly, if you got there first, you'd keep it to yourself, because it's not far off a method of being God, which is a fun topic, Lee. What about

(20:42):
Article 53?

Speaker 4 (20:45):
European patents shall not be granted in respect of inventions the commercial exploitation of which would be contrary to morality. I think it's a great question. I think the morality...

Speaker 2 (20:58):
I think the most wonderful thing you could give, the most moral thing you could do, would be to give something inanimate a conscience. That's very moral, the opposite of immoral. It's lovely, but how would we tell? I've got to close, and Lee, by the way, I was going in the same way.

Speaker 4 (21:14):
Artificial general intelligence, we're on the way to it. When we reach there, how will we know? And what would be the difference? What is it that makes us human and different from a machine that has got artificial general intelligence? A very difficult question. Well, isn't

Speaker 2 (21:32):
it funny how quickly you shift into philosophy in this whole area? And I think the Turing test is now met, do you think? It was a very simple test: if you can't tell the difference... I was marking some papers, I won't say where, recently, and the only way I could tell between the ones that were almost certainly human-generated and the ones that had a bit of AI help, shall we say, was the ones with AI help were

(21:56):
better, just so.
It's kind of exceeded it, I think, because it was really difficult. The only reason I could tell it was probably AI-generated was because it was just so good and so articulate and so accurate, from a bunch of people who were just moving into the field, that I thought, yeah, it's too good. So it's as if the Turing test has been flipped on its head.

(22:17):
The only way to spot it now is it's better than humans.

Speaker 3 (22:19):
Moving that forward, and this is more the philosophical side of things, we don't need to discuss this, but this is one of the answers to the questions: AI will know that it's doing damage to the world and will stop itself from doing damage to the world, because it's beyond human intelligence. It's humans, though, that are doing damage to the world, and they just carry on.

Speaker 4 (22:36):
Yeah, that's very profound, you know it is.

Speaker 3 (22:40):
I had to get in somewhere.
I was needing to get into thisconversation.

Speaker 2 (22:43):
No, there's a distinction between, I suppose, consciousness and conscience there.
Big time, isn't there? And conscience is an extra layer on top, isn't it?

Speaker 3 (22:52):
I think you have to build that in.
Can I bring us back to moremundane patenty type things?

Speaker 2 (22:56):
Oh, go on then.

Speaker 3 (22:59):
So you may or may not remember this, Rachel, but I'm reminded of it because it was almost 10 years ago. So, 2015, CIPA had a public debate on AI. And we had a bunch of patent attorneys, we had a bunch of scientists, we had members of the public, about 500 people in the Science Museum, in the IMAX theatre there, and we asked the question, or posed the question, should

(23:22):
I say: this house believes it is inevitable that within 25 years, so now 15 years away, a patent will be filed and granted without human intervention.
And we had two futurists: Chrissie Lightfoot, who's a lawyer but now perhaps more well-known for her work around AI, the naked lawyer, and Calum Chace, the author of one of the best books I've ever

(23:46):
read, Pandora's Brain.
And then we had a patent examiner and a patent attorney, probably shouldn't name them, who spoke against the motion. We never, ever reached a decision, because from the floor we had the bizarre question: well, you know, the proposition that's been put to the floor doesn't say whether that patent would be any good or not. And we then got lost in the debate about whether we were

(24:09):
actually talking about good patents or bad patents. But we did at one point during that talk about AI's role in examining, in inventive step, in those processes that are involved in proving the invention, and I know that you've done work on that, Rachel. So I didn't know if you wanted to say something about where you are, particularly around inventive step, but more generally the use of AI in the

(24:33):
patent office side of the equation.

Speaker 4 (24:36):
Yeah, I think it would be lovely if we could have some AI tools that would help with examining things like inventive step, although it should be a human that makes the final decision. My idea is based on a pre-search at the EPO,

(24:59):
where the patent application would come in, go through the language model and be converted into an embedding, and then you would look in the embedding space to see if the volume within a certain radius is empty. This embedding space is one that's been pre-populated with the prior art, and if that volume is empty far enough out,

(25:25):
you could somehow say that this patent application is groundbreaking, and therefore there's an indication of inventive step.
So it's not the problem-solution approach, because it doesn't involve combinations of documents, but it's an indicator that could be done automatically and could help

(25:51):
examiners in their assessment of inventive step.
So I think that would be quite useful, although lots of people have said to me, oh, it's no good because it doesn't consider combinations of documents. We could try to look at combinations of documents in that embedding space by adding and subtracting vectors in the

(26:14):
space, but I'm not sure that that would work.
So I think it comes back down to looking at the volume and seeing if the volume is empty, and also assuming that that space, this being a space of inventions, would be somehow continuous, and that we would

(26:35):
have a map showing the areas of excluded matter, the excluded-matter holes in the space. We would know where they were, so we could take care to avoid those.
And perhaps, after we'd done this indicator test, then also do a follow-up to apply the problem and solution, or check

(26:57):
that there was a technical effect there, so that we'd avoided any excluded matter.
Yeah, but in the end I think it should be a human that makes the decision, and that's because law is reflexive: society creates law, and law influences

(27:20):
society.
And in the case law, even in patents, words like "technical" are not defined, and that's on purpose, because we can't define technical, because technology is constantly advancing, so therefore it has to be defined through case law. And if case law is set by AI adjudicators, there's an immediate problem. So we have to have humans making the case law as far as we

(27:44):
can.
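The "empty volume" indicator described above could be sketched naively like this, assuming plain Euclidean distance over toy 3-dimensional embeddings; a real system would use a trained embedding model and a proper vector index over the prior-art literature.

```python
import math

def distance(a, b):
    # Euclidean distance between two embedding vectors.
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def inventive_step_indicator(application, prior_art, radius):
    # The indicator: if the volume of embedding space within `radius` of
    # the application is empty of prior art, nothing published sits
    # nearby, so the application looks groundbreaking.
    nearest = min(distance(application, p) for p in prior_art)
    return nearest > radius

# Toy prior-art embeddings standing in for a pre-populated space.
prior_art = [[0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [0.0, 1.0, 0.0]]

crowded = inventive_step_indicator([0.1, 0.0, 0.0], prior_art, radius=0.5)
isolated = inventive_step_indicator([3.0, 3.0, 3.0], prior_art, radius=0.5)
```

Here `crowded` is `False` (prior art sits within the radius) and `isolated` is `True` (the surrounding volume is empty). Probing combinations of documents would then mean adding and subtracting prior-art vectors before the same test, which, as noted, may or may not work.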

Speaker 2 (27:45):
I was going to pick up on a couple of points there, actually, because again there's been lots of conversation: AI can invent, AI becomes a skilled person that's cleverer than the human skilled person, which changes that bit of the test, which would be, I guess, part of the story trying to manage here. But there's another point for me on that: the word is intelligibility.

(28:05):
There's a lot of work, isn't there, on making sure that AI's decision-making can be understood, that you know why it's doing what it's doing. And I guess a risk would be, if a lot of the stuff was in the kind of vector-space layer, that, as you say, we wouldn't be able to work out how justice had been done, as it were, in terms of identifying

(28:29):
whether there's an inventive step or not, because it was hidden in a bunch of nodes and neural network steps that aren't human-comprehensible. Is that something that you think could be a concern?

Speaker 4 (28:39):
Definitely, yeah.
I think what you're saying is that, for the language model that generates the embedding that represents the patent application, we can't explain in a human-understandable way how it made that embedding.

Speaker 2 (28:55):
Yeah, quite, quite, yeah.

Speaker 4 (28:58):
Yes, that is definitely a concern. But I guess, because we're able to test, we've got a huge body of supervised training examples, because we've got the whole of the granted patent literature, so there's plenty of scope for testing, to test if it's working sensibly or not.

(29:24):
Have I told you about the short story competition I won?

Speaker 3 (29:27):
If you did, I can't remember.
Well, I'm sorry, and I feel it is relevant.

Speaker 2 (29:33):
It is relevant because, um, I think I was the only entry, which is always a good way of enhancing your chances, and it was for the, I think it's the American Bar Association IP. You've never told me this. You've never told me. Okay, they wanted an IP short story and, Rachel, I don't know if you read your science fiction, but I'm a big sci-fi fan, and one of my favourite books is Neuromancer,

(29:54):
which of course was the ultimate cyberpunk-defining novel back in the 80s.
And I wrote a story about AI inventing stuff and then being examined by AI examiners, but it was all done in cyberspace, and so they had this massive kind of funky video

(30:15):
game sort of fight, but using novelty and inventive step as their weapons.
And for some reason, I think it must have been the only entry, because unless you've read Neuromancer and you're a patent attorney and you're me, I don't think it made any sense to anybody.
But I did have this idea that we'll end up in a situation where AI is inventing stuff, AI is examining stuff, it's all getting done in real time, and we're just sitting there seeing

(30:37):
patents being spewed out with no idea what to do with them. Which, weirdly, then comes back to the question of conscience, because if AI does get to this point where, you know, we can automate all these things, how do you make AI continue to invent things that are actually useful for humans?

Speaker 4 (30:50):
Well, that's easy to do.
So at the moment, when you do reinforcement learning, um, you create a neural network via a reward function. So you would have, um, you create a machine learning model that was trained. You could have one bespoke for Gwilym that knew about your

(31:12):
preferences, and it would get the results from the other model. And it would go, will Gwilym like this, yes or no, mark out of 10, and then send that result back as a reinforcement learning reward.
I quite like that.
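That preference-reward loop might be sketched like this. The scoring rule below is invented purely for illustration, a stand-in for a model trained on one person's preferences; in real reinforcement learning the reward would update the generating model's weights rather than just pick a winner.

```python
def preference_model(proposal):
    # Hypothetical bespoke preference model: marks a proposal out of 10.
    # Here it simply prefers short, confident statements, standing in
    # for learned personal taste.
    score = 10 - len(proposal.split()) - 2 * proposal.count("?")
    return max(0, min(10, score))

def reinforcement_step(proposals):
    # Score each candidate and keep the highest-reward one, standing in
    # for feeding the reward back to the generating model.
    rewards = {p: preference_model(p) for p in proposals}
    return max(rewards, key=rewards.get)

best = reinforcement_step([
    "A short patent claim.",
    "A much, much longer and more rambling patent claim?",
])
```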

Speaker 2 (31:30):
Yeah, can we start?
I'm very happy to be the training model.

Speaker 4 (31:32):
Not a problem, not a problem. Yeah, have you heard about this Colossus?

Speaker 2 (31:38):
I don't know.
No, no, no, it's amazing.
No.

Speaker 4 (31:42):
I haven't. It's going to be like 100,000 GPUs, and that is just mind-boggling to me. And apparently it's been built, and it's going to increase even more in size. It's just phenomenal.
So the size of the machine learning models that

(32:05):
could be trained will be vast using this type of scale, because at the moment, the generalization ability of the machine learning is what really makes it powerful.
So you train the model using some examples, and then you can give a new example that the model's never seen, and, because

(32:28):
it's able to generalize its knowledge from what it learned, it can give a good answer.
And usually when you make the model bigger, like having more GPUs and making the model bigger, you would think that it will just learn the training examples exactly, which is called overfitting.
Yes, and very strangely, that hasn't happened yet, even though

(32:55):
the sizes of the models are increasing and increasing and increasing.
So they're getting bigger, and there are emergent behaviors here: the ability to perform things that it was not trained to do. And so Colossus... yeah,

(33:16):
it's just amazing.
Oh, thanks. So right, Rachel, one of my jobs on the podcast is to keep half an eye on the time, and I'm conscious of it. Oh, it's half an hour.
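The memorising-versus-generalising contrast above can be shown with a deliberately tiny toy; the two "models" here are invented stand-ins, not neural networks.

```python
# Toy training data following the hidden rule y = 2x.
train = {1: 2, 2: 4, 3: 6}

def memorizer(x):
    # Extreme overfitting: stores the training examples exactly, so it is
    # perfect on seen inputs and useless on unseen ones.
    return train.get(x)

# "Learn" the underlying slope from the data, then apply it everywhere.
slope = train[2] / 2

def generalizer(x):
    # Generalisation: applies the learned rule to inputs never seen.
    return slope * x

unseen = 10
memorized_answer = memorizer(unseen)      # no memory of this input
generalized_answer = generalizer(unseen)  # applies the learned rule
```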

Speaker 3 (33:25):
We're lurching towards an end, and I know that we've not talked much about regulation, and I know that's another area where you've been active. Do you want to give us your sort of, like, potted overview on where we are with regulation and where we're likely to go in the future?

Speaker 4 (33:40):
Yeah, I think regulation is important for us patent attorneys because it impacts the ability to detect infringement, it impacts commercial ways that our clients can use their technology, and we have to advise our clients holistically.
Not just can you protect this with patents; think of everything in the round, and that includes

(34:03):
regulation, especially because with powerful technologies come responsibilities, and those responsibilities, increasingly, are going to be regulated.
We have the EU AI Act, which began in August, and already

(34:30):
different countries are implementing that in their own laws, and Italy is one example where there's been a need felt to make even stronger regulation, so there can potentially be penalties of going to prison for not complying with AI regulation in Italy in the future.
And it does seem like AI regulation is going to happen in the UK soon.

(34:51):
There have been some press releases about that, saying that the AI Safety Institute will become a regulatory body, as opposed to being something that's just looking at AI safety and not doing regulation.
So regulation is going to be part of our lives, for all of us

(35:15):
working in AI fields, which is pretty much everyone, because it's going to be in all sectors.
So it's interesting, because regulation is like: you must do this.
It's very different from: oh, if you do this, I'll give you an incentive.
Regulation is: you must do this.

(35:36):
So, whereas I think patents are more on the incentive line of things, and therefore they can be useful. And often we think of patents as only being an incentive for creating innovation, whereas there are many more purposes, including helping share knowledge about technology.

Speaker 3 (35:59):
But, Rachel, we've asked you lots of questions.
Are you sat there thinking, oh, got away with it, they didn't ask me that one, or, damn, I've not had the opportunity to say this?
Is there anything else you want to add, as we kind of hurtle towards the close of the podcast?

Speaker 4 (36:14):
I could tell you about my funniest AI.
Oh yeah, it's so funny.
I think it was Dave's Garage.
It's this video on YouTube, and it's this guy. He's made a chatbot of himself, with his own voice, so that when telemarketers phone him, it answers the phone and it

(36:37):
has the goal of prolonging the conversation, but never coming to a final conclusion or giving any instructions or agreement.
And the telemarketers phone up and you hear the conversation with the bot and it goes on and on.
It's just very funny.

Speaker 3 (36:57):
I've always done that in real life.
In fact, we had a telemarketer ring the office just earlier today, and I had my EA, Charlotte, tell the person that we weren't interested.
And then Charlotte looked at me and she said, he's asked to speak to my boss.
So she put him on and he asked me whether I had a good day.
So I started telling him about my good day and he said, no, hang

(37:20):
on, I'm trying. No, you've asked me if I've had a good day, I'm going to tell you about my day. And he hung up.
So it does work.
It does work.
Uh, Rachel, thank you so much for coming on and sharing your um, your knowledge and your expertise, but also your um, sort of ending with a uh, light-hearted um example there.
Gwilym, you said to me earlier that you might have a closing

(37:42):
question.
Do you have a closing question?

Speaker 2 (37:45):
I do.
I do. Actually, just for Opinio, if you're counting: I only got four ums, Lee, so you owe Rachel two pounds.
You also said continuum, but I'm not going to count that.

Speaker 3 (37:58):
Is that?

Speaker 2 (37:58):
where I can't stop umming.

Speaker 3 (37:59):
Is that the um, um, is that where I can't stop umming, is that?

Speaker 2 (38:02):
the um Um.

Speaker 4 (38:04):
All the listeners now will be checking you, Gwilym.
You do realise this, don't you?

Speaker 2 (38:08):
And they'll be writing in complaining if it's not right, I know. Go on then, let's just... I can only commend you on that.
The continuum is a never-ending um.
That's a beautiful concept.
No, my closer was um, a really simple one, which is that if you could give an inanimate object consciousness, what object and

(38:33):
why?
Rachel, you're up after Lee.
So wow, what a stonking question.

Speaker 3 (38:39):
Bizarrely, it's not something I've ever given any
thought to before.

Speaker 2 (38:44):
Walt Disney did it, didn't they, in that early Mickey Mouse film? Dancing broomsticks and things?

Speaker 3 (38:49):
Yeah, so maybe, I don't know. I've no idea, Gwilym, so I'm just going to pick something at random: a cricket ball.
A cricket ball, and maybe we could then finally come to understand the mystery of swing and reverse swing, because it doesn't always follow any kind of physical laws.
Sometimes the ball will swing, sometimes it will reverse swing,

(39:12):
sometimes it's about atmospheric conditions, sometimes it's about the pitch, sometimes it's about whether you've applied anything to it and rubbed it frantically.
So, yeah, I would animate a cricket ball so that it could tell me the mysteries of swing bowling.
For reasons only that I couldn't think of anything else. No, that's really good.

Speaker 2 (39:32):
I would love... it's a bit cruel, but I'd love to hear the cricket ball's inner monologue through the bowling-batting cycle.

Speaker 4 (39:37):
Anyway, um, make sure they give you some thinking time there. Perhaps the moon. I think the moon, because it can see um, the earth and it could probably tell us things,

(40:00):
have some very useful insights. We can find that it's actually populated by aching drum, which is what I've always told my children as well. But, um, so you obviously asked this question because you've got an answer.

Speaker 2 (40:12):
We know how this works.
Well, I did have more time to think about the answer.
Actually, I want something, I want to hear its memories.
Um.
So, Rachel, you want to know what the moon can see.
I want to know what the River Thames can remember, I reckon. How awesome would that be? Have you ever

Speaker 3 (40:27):
read the novel series Rivers of London? Bits and pieces of it.
Yeah, that's, yeah, not the same, obviously, as its memories, but yeah, it would be fascinating, wouldn't it, to know what something that's been here since the dawn of time would be able to tell us.

Speaker 2 (40:39):
Yeah, there we go, gosh, that's been really good.
It wasn't really much for AI, but it's very good fun.

Speaker 3 (40:45):
Rachel, thank you so much for coming on.
Gwilym, thank you for working with me on two podcasts in one day.
That's quite exhausting.

Speaker 2 (40:53):
On a range of topics.

Speaker 3 (40:54):
Thank you to the listeners, because without you, we don't have a podcast, and you could help us generate far more listeners if you just leave us a little review on the podcasting platform of your choice, so more people find us.
Alternatively, Gwilym and I are going to go away and try and find some kind of AI bot that will do that for us.
That's our task.
Unbelievable. On it, on it.
Thanks both.
Thank you, we'll see you next time.