
July 8, 2025 26 mins

I spoke with Professor Richard Susskind CBE KC (Hon), President of the Society for Computers and Law and author of many notable books, including his most recent publication, How to Think About AI - A Guide for the Perplexed. We discussed strategies for balancing the benefits and risks of artificial intelligence, how technology can help legal professionals better serve their clients, and ways that leaders in the legal field should approach AI to drive their organizations forward.


Episode Transcript

Available transcripts are automatically generated. Complete accuracy is not guaranteed.
(00:01):
Welcome to Reinventing Professionals, a podcast hosted by industry analyst
Ari Kaplan, which shares ideas, guidance, and perspectives from market
leaders shaping the next generation of legal and professional services.
This is Ari Kaplan, and I'm speaking today with the legendary Richard

(00:24):
Susskind, President of the Society for Computers and Law, and author
of many noted books, including his most recent publication, How to Think

(00:33):
About AI: A Guide for the Perplexed.
Richard, wonderful to see you.
Great to see you again.
How are you?
I'm doing great.
This is a significant privilege.
It's been many years since we've seen each other, and I just feel
so lucky to speak to you today, so thanks for taking the time.
No, not at all.
It's funny, although we haven't seen each other so recently, because we

(00:54):
interact online and, with people online, you feel you're in contact far more
frequently than perhaps is the case.
For the handful of people who may not recognize your name, can you tell us about
your background, and why you wrote How to Think About AI: A Guide for the Perplexed?
In the early eighties, when I was a law student at Glasgow University in

(01:16):
Scotland, I became interested in AI.
I was looking at how computers might affect the judicial process, and
I became hooked by this subject.
I went to Oxford and did a PhD from '83 to '86 on AI and law.
In the late eighties, I co-developed the world's first AI system for lawyers.
So the beginning of my career was very much focused on what we would

(01:38):
now call first-generation AI systems.
And I've spent decades really thinking about the way that technology can be used
in law and other professions, and waiting for the moment when technology would
catch up with my hopes and aspirations.
What we've seen over the last few years is quite remarkable: stunning advances in AI.

(02:00):
I think I've seen more progress in the last four years than
I've seen in the previous 40.
Because it has been, I'm afraid, 44 years in this field.
And I was struck, when I've been reading so much and hearing so much about the field,
that a lot of the debate is dominated by hugely impressive technologists and tech
entrepreneurs, but they're not speaking the language of the ordinary person.

(02:21):
I think a lot of the books and articles that are being written
are quite technology-oriented.
So I wanted to have a go, for the first time, at writing for the general reader
and trying to explain the impact of AI.
The social impact, the economic impact, the legal impact, the
ethical impact, and in so doing, give some sense of what this remarkable
set of technologies is all about.

(02:43):
So it's a new departure for me in later life.
What are the key takeaways that you want readers to ultimately understand?
The big message of the book is that there is no single agreed
view on where AI is taking us.
So when you hear people dogmatically insisting that AI is doing X, Y, and

(03:06):
Z in the 2030s or 40s, I think we should treat that with great caution.
Indeed.
I suggest there are six different hypotheses that come from the AI
community about where AI might take us.
So, what I'm encouraging generally is an open-mindedness.
I find a lot of dogma and insistence in this field of AI.
I think we need to be far more relaxed and try to understand that this is

(03:29):
an emerging phenomenon of monumental significance, and not jump too quickly
to conclusions. So this is part of this idea of how to think about AI.
A very specific question, though, secondly emerges from the book, and that is: what
if AGI - and AGI is artificial general intelligence, systems performing what we
might say are all the cognitive tasks of human beings - can match our performance

(03:54):
right across the range of things we do?
And if you'd asked the AI companies building these systems before ChatGPT,
"How long do you think before AGI?", most would've said 20 to 40 years.
If you ask them that question today, they'll say five to 10 years.
So it seems to me wholly remarkable that most intelligent people, most leaders in

(04:18):
government and businesses, are giving no serious attention to the idea that AGI may
well come within the next decade, and it would, in my view, be the single most
significant event in the history of humanity if we develop systems that match
human performance, intellectually speaking.
So they're the two messages.

(04:38):
One is about opening your mind and seeing that there's no single future for AI.
We're feeling our way, and we should approach the field in an exploratory and,
I think, curious way, rather than a dogmatic way.
Secondly, there's a very serious single question that emerges from the book:
what if AGI?
Despite the complexity of the topics, I found that the outline of

(05:04):
the book is almost written in a way that teams can digest it in a series
of strategic planning meetings.
From the outline of what it is, to how it can be used, to the risks,
to the ethics associated with it. Was that framework intentional?
Yes and no.
The reality is that AI throws up all sorts of different issues.

(05:24):
It throws up business issues, it throws up legal issues, it throws
up regulatory issues.
It throws up historical issues.
And what I thought I would do is just take each in turn and see what emerges.
And it wasn't really intentional.
It was intentional to break it up into different subjects, but what
emerges is a series of modules.

(05:45):
And so quite a lot of it can be read in isolation from the rest.
So if you're interested in risks, you can go to one chapter and it says, here
are the seven categories of risks.
And if you want to know more, you can go to the next chapter, and it tells you
how we might manage these risks.
But you could read that without reading the earlier modules.
Obviously, as an author, I'd love people to read it cover to cover.

(06:07):
I am terribly conscious that people don't seem to read books from cover to
cover anymore, so I want it to be the kind of book that people could pick up
and, in half an hour, read a chapter, absorb something of significance and
importance, hopefully helpful to them, and then maybe put it down for a couple
of weeks and pick it up at a later stage.
In more practical terms, though.

(06:27):
Reflecting on the consulting work that I do with big professional firms, I
also wanted to be able to say, look, you can read two or three chapters
in preparation for this meeting.
You don't need to read an entire book.
So yes, it's written modularly, which to some extent reflects the fact
that it draws on and contributes to various different fields.
And even towards the end, the last three chapters are quite philosophical.

(06:50):
So I ask the question about whether or not machines are or could ever be conscious.
I ask the question about what it would mean to have AI generating
very lifelike virtual worlds.
And I ask the question of the obligation we owe to future generations
of humans in relation to AI.
Should we be putting the brakes on, or should we be developing

(07:12):
them at a more advanced rate?
These are all big issues, but what I've tried to do is compartmentalize them
and give them in bite-sized chunks.
Which section was easiest to write, and which was most difficult?
The most challenging bit was my favorite chapter - I've never really
had a favorite chapter before - which is chapter 10, Unconscious Machines.
What I'm really doing there is drawing on my background in

(07:34):
philosophy as well as law and technology.
And it's really drawing in a lot of the thoughts I've had over the last 40 years
or so, and also my reading of great philosophers over the years, to try and
address the issue of human consciousness.
Draw out some lessons from there in terms of machine consciousness.
Speculate about future states of machines.

(07:56):
Think a little about animal consciousness and so forth, and bring it all together
in about 25 pages that I hope people can rattle through and think, wow,
that's covered a lot of ground.
He hasn't solved the problems, but at least he's clarified the problems.
Trying to distill all that quite complicated thinking and very
complex ideas into something that's quite straightforward to

(08:17):
read, I find very challenging.
It's easier to leave something complicated than to make it simple.
So my emerging mantra, when I'm trying to help other people who are writing,
is: have the confidence to write simply.
I've tried to have that confidence.
The easiest bit for me was the beginning, where I'm telling the history of AI,
'cause to some extent I was involved with it.
I started in the eighties.
Of course, it predates me; it goes right back to 1950, in my

(08:39):
view, but that was all in my head.
It was a question of essentially just sitting down and, as elegantly as
possible, trying to tell this history in a way that would be engaging and memorable.
I think it is quite important to tell the history, 'cause there's a lot of people
who somehow seem to believe that AI was born in late 2022 when ChatGPT came out.
Now, it is fully and entirely remarkable, but it wasn't the first

(09:04):
chapter in the story of AI, and it's not the last chapter either.
It's the latest chapter.
To put it in this broader context actually gives us all a bit of humility,
but also a sense of the trajectory.
That's one of the running themes of the book: that in the early days of AI, in
the fifties, sixties, and seventies, we saw breakthroughs every five to 10 years.
We're now seeing breakthroughs, not necessarily technological,

(09:25):
but in application terms, every six to 12 months.
In telling the story, you've got a sense of the way in which
progress seems to be accelerating.
You write that balancing the benefits and threats of artificial intelligence
is the defining challenge of our age.
How should readers interpret that conclusion?

(09:47):
I say that saving humanity with and from AI is also the
defining challenge of the day.
There's an underlying point here, which I've been trying to get across
in a lot of my talks: it's okay to believe that AI is both
potentially a force for the good and potentially a force for the bad.

(10:09):
That it could, and, I believe, will help overcome some of our greatest
challenges: the global health crisis, global education crisis, global
access to justice crisis, global climate crisis, in one way or another.
I believe that AI systems will help us advance our solutions in each of
these vital areas, so that's great.

(10:30):
However, at the same time, as I forced myself to write about the risks
and categorize them and classify them and ponder them, I really
viewed them as a mountain range of obstacles and vulnerabilities.
People who are thinking about AI, who're quite new to the field,
often feel they either need to support AI - like supporting a team - or not.
But it seems to me it's not like that at all.

(10:51):
You have to hold two thoughts in your head.
Here is something that could be game-changing in a positive way for
humanity, and also in a negative way.
So that's why we need to be balancing.
Now, historically, I think it's fair to say most of my work, and
certainly in law and the professions, was more focused on the positive.
So some people, even reviewers, have kindly said, gosh, Richard is writing
more negatively now, but it's just a different focus of the book.
I couldn't write a book that was meant to help people think clearly about AI and
not say, there's a downside here, folks.
Nor did I want to write a book that would be dominated by doom and gloom, because
I think that would also be one-sided.
And so I think there is a balance there that maybe you don't see in some

(11:35):
other writings, works, and research, where people have seemed to want
to side with one school or another:
those who are fanatically in favor and those who are viscerally in opposition.
I actually also appreciated the dual perspective, having written a book called
The Opportunity Maker and another one called Reinventing Professional Services.
I'm an eternal optimist.
Thinking about this, and your perspective on showing us the practical, the

(11:58):
future, what to be thoughtful about, I think is crucial,
certainly in professional services, which is the area that I focus on.
You know what it's like when you write a book, where you're getting
feedback from people that comes in a whole bundle of different ways.
You get far more reviews, you get comments on social media, you have
conversations with people, and the nicest thing is actually just
getting emails out of the blue from people who've read and enjoyed it.

(12:19):
I think this point about allowing yourselves to have these concurrent
and different perspectives seems to be landing quite well.
You share a memory of being at a neuroscience conference and making
the then-controversial statement to neurosurgeons that patients don't
want neurosurgeons, they want health.
And obviously that's true in many professions, including law.

(12:42):
What is the best way for us as legal professionals to reframe our
thinking to better serve clients as our roles continue to evolve?
I like telling that story because, on the one hand, it's an example of what
economists call task substitution.
And when most economists are writing and thinking about AI, and management

(13:05):
consultants too, what they tendto do is look at a job and look at
all the various tasks involved, andthey think, where can we take our
human, plug in a machine instead?
That's a very common conception of AI.
And indeed, a lot of the quiteinfluential reports on the impact
of AI do a little more than that.
They look at how many tasks theythink could be taken on by machines,
and on the basis of that gives somekind of percentage likelihood of

(13:28):
your job existing in the future.
I think that's rather superficial, and my theme that day with the neurosurgeons,
who I think were inviting me to speak about robotic surgery, was that surely
there are two other, more important aspects to healthcare in the future.
One was non-invasive therapy, and the other was preventative medicine.

(13:51):
So I was saying, surely in 30 or 40 years people will look back and say, it's
unbelievable we used to cut bodies open so often, because we'll have
AI-based tools, which will help, in my view and in their estimation too.
And the idea has gotten quite a lot of support across the medical world: that
the focus won't be simply plugging the robot in and removing the surgeon.

(14:13):
The focus will be on this idea of non-invasive therapy, because that's
the kind of outcome people want.
They want something that's less painful, that's less invasive.
They want something that's quicker, cheaper, less forbidding, and so forth.
So I think there's a lesson there for lawyers: don't simply think about
how we can automate some of the tasks.
So we look at something that a junior lawyer does - document review in
litigation, due diligence, basic legal research, or initial contract drafting -
and we say we can plug a machine in.
What we're doing is simply automating.
We're simply grafting technology onto our old ways of working, and that's
gonna be fine for a couple of years.
But when we're thinking of the transformation of legal services, it
won't simply be a turbocharged version of what we have today.

(14:57):
AI will afford us, in ways that haven't yet been worked out and everyone's
trying to work out, a form of legal problem-solving, a legal service,
that's more like non-invasive therapy.
That's our challenge.
How can we find ways of giving clients the legal outcomes they require,
but delivering them in a different way?

(15:18):
You'll remember, in the past people used to talk about how people don't want
power drills, they want the hole in the wall.
It's the same message.
Our focus should not be on an AI-based power drill.
It's: will AI enable us to deliver the holes in the wall in different ways?
The final element of the story about neurosurgeons is the idea of preventative
medicine, and there's a very strong message here in law: preventative

(15:39):
lawyering, legal risk management, putting that fence at the top of the cliff
rather than the ambulance at the bottom.
I've never met a chief executive who preferred a really big dispute well
resolved by a large team of lawyers to not having a legal dispute at all.
So problem avoidance, or indeed dispute containment, should be a focus.
And I think, again, we'll develop AI tools that can support this.

(16:01):
So the fundamental message to lawyers is: although I understand
this passion over the next few years to get productivity gains and
efficiency gains from essentially layering AI onto legal practice,
the future of legal service actually is not empowering lawyers.
It's empowering people who are not lawyers.
It's helping organizations and individuals undertake legal work for themselves.

(16:24):
The future for legal professionals is helping develop and provide these tools.
That's one of the messages, not just of that book, but also of the
book I wrote with my son Daniel, The Future of the Professions.
In your discussion of risk, you state that you're upbeat about AI, but
then state, "I am, at the same time, increasingly concerned and sometimes

(16:45):
even scared by the actual and potential problems that might arise from AI."
How can leaders balance the practical aspects of these concerns?
What's fascinating is that so many leaders, policy makers, government
officials, and politicians have been focusing, completely understandably,
on the risks of generative AI, the ethics of generative AI, and

(17:07):
I find this terribly limiting.
Now, I know in practical terms we have to address the systems that are out there,
but if we're doing long-term policy making and long-term strategic thinking, we
have to assume that these technologies are going to be far more advanced.
And so while I accept people are worried that today's systems, for example, could
be relied on and give rise to loss, damage, or injury, and while I accept that
today's systems make mistakes, and we should be very careful about their use
in high-risk areas, I'm far more worried about the idea that we have remarkable
concentrations of power, of capital, of data processing in a very small

(17:50):
number of profit-oriented organizations.
And this, globally, far from helping solve the poverty problem, could in fact
increase the disparity between the rich and poor.
So we have a massive question of social justice that's lurking in the wings.
And while, as I say, I understand people worrying that this system
might give wrong legal advice.

(18:11):
It seems to me there are bigger fish to fry as well.
Similarly, people might be concerned about their own individual jobs, but
there is a huge discussion to be had even if we don't achieve full-scale AGI.
Even if the systems we have today, as I think is inevitable, become

(18:32):
far more reliable, the impact on the labor force is going to be gargantuan.
And I simply think that no politicians or policy makers are prepared to raise
their heads above the parapet and confront this, because there's a lot of focus,
and I say this is understandable, on what I call short-term AI rather than
long-term AI.
So, short-term AI is using generative AI to automate what we do.

(18:56):
It's task substitution, it's efficiency gains, it's productivity gains.
It's why governments are interested.
It's why businesses are interested.
It's why investors are interested.
But we will not get to the future I think we want to get to by simply piling
more AI onto our old ways of working.
And in parallel, we need to do a lot of strategic thinking

(19:17):
about likely developments in AI.
That's why I ask the question: what if AGI? And once you get to thinking
about AGI, you're also talking about the worries, for example, of the
weaponization of AI: that we might be putting at
criminals' and terrorists' fingertips a more powerful tool than they've

(19:38):
ever had at their disposal before.
So again, I don't want to minimize the idea of current AI systems giving rise to
problems and confusions, but I do think that really is the tip of the iceberg.
At the beginning of the book, you include a quote that says: you insist that
there's something that a machine cannot do.

(19:58):
If you tell me precisely what it is a machine cannot do, then I can always
make a machine which will do just that.
And then later, you highlight that we shy away from these difficult questions
when we fixate on generative AI.
For this is simply the latest, and certainly not the last.
When should leaders prompt the deeper conversation about AI?

(20:20):
Since so many struggle now just to start the practical conversation
about where it should go and how it should affect, and hopefully empower,
their teams and their practices.
The von Neumann point was tongue in cheek, but it was quite funny.
The question is, when should leaders make a move?
The answer is now, but I think the key is in the question there, because

(20:44):
I believe in a lot of professional firms, even at the top, what we have
is managers rather than leaders.
A lot of law firms, for example, over the last few decades, haven't really needed
to do fundamental strategic thinking.
Their business plan, if you can call it that, has been: hopefully next year
we'll win a bit more work and we'll manage to cut our costs a bit and

(21:06):
we'll improve our profitability, but not thinking about fundamental new questions
about what new markets should we be in or what fundamental new
products do we need to develop.
What we've had, other than the blips, and this is not to minimize
them, but other than the financial crisis and COVID, which didn't

(21:28):
even then fundamentally changedthe business models of professional
firms other than these periods.
We've had fairweather managers andlaw firms and managers who's very much
focused in the next quarter and the nextfinancial year on the issues of the day.
I'm seeing, for the first time, really, that we're needing leadership,
and leaders are people who are prepared to spend time thinking

(21:51):
deeply beyond their term of office.
Leaders are people who are preoccupied with the sustainability, in the
long run, of the business of which they're currently the custodian.
Leaders are people who are prepared to look beyond the current financial
performance to long-term viability, who're prepared even to forego short-term

(22:13):
profitability for long-term health.
And we haven't really needed to think that way.
Even R&D, research and development: we haven't thought
that way in professional firms.
We've just been churning out often very high-quality work, really
clever people charging, and away we go, and we're needing
leaders to think more fundamentally.

(22:35):
So when I meet a lot of law firm leaders and we talk about this,
many of them say with a straight face, that's not gonna be my problem.
It's beyond my term of office.
And actually, quite the contrary.
What you do and decide over the next couple of years is going to determine
the course of the firm in the long run.
So the answer to your question is: we need leadership now rather than a kind of
(22:58):
managerial approach to legal services, a light hand on the tiller, a preservation
of how things have always been.
We approach, I believe, a fundamental discontinuity in our economy, in the nature
of professional services, in the way in which clients and organizations will

(23:20):
require their guidance to be delivered.
And so this goes well beyond the traditional management role.
What skills should listeners develop to become those kinds of leaders and
ambassadors for an AI-centric future?
There are two sets of skills issues.

(23:41):
One, which I've written a lot about in books like Tomorrow's Lawyers and
The Future of the Professions, is the new skills that young professionals
will need.
That's rather different from the new skills of the leader.
And it seems to me that leaders need to have a curiosity and an

(24:02):
open-mindedness about the future that many professionals don't have.
A lot of strong leaders are quite dogmatic, and it goes
back to my theme in the book.
We've got to relax ourselves a bit and open ourselves up to
lots of different possibilities.
So there's a mindset issue, which isn't a dogmatic insistence on your own
rectitude.
It's a willingness to immerse yourself in fields that are actually

(24:27):
beyond your fields of competence.
It's a willingness to look at other professions, industries,
and sectors and learn from them.
It's a willingness to lead by example.
It simply won't do to say, this AI business is for younger people;
I can't program my video recorder, that kind of comment.
So it's leadership by example.
It's leadership by curiosity.

(24:49):
There's also, and again, given the collegiality and consensus that many
professional firms aspire to securing, this is a bit countercultural, there are
going to have to be leaders who are prepared to put their heads on the line,
who are going to be bold and say, I know that many mainstream partners may not
believe this,

(25:09):
but this is the way we are going to go in this firm and you've asked me to lead,
and that doesn't mean everyone will agree.
And these are new characteristics.
As I say, the traditional leader has often been someone who is
perhaps a wonderful client handler, or a wonderful winner of work, or a wonderful
legal practitioner in whom there's great confidence amongst the troops.

(25:34):
This is a different kind of phenomenon.
This is someone who's bold, who's curious, who's open-minded,
who's willing to lead by example.
And I think we'll be looking at different kinds of leaders in
professional firms in years to come.
This is Ari Kaplan, speaking with the remarkable Richard Susskind, President
of the Society for Computers and Law, and author of many noted books, including

(25:59):
his most recent publication, How to Think About AI: A Guide for the Perplexed.
Richard, what an honor.
Thank you so very much.
Pleasure's mine.
Great to see you.
Hope we can meet in person before long.
Thank you for listening to the Reinventing Professionals podcast.
Visit reinventingprofessionals.com or arikaplanadvisors.com to learn more.