
May 2, 2023 28 mins

In this episode, Ramsay Brown (Founder & CEO, Mission Control) and Andreas Welsch discuss how leaders can trust generative AI in their business. Ramsay shares his perspective on responsible AI practices and provides valuable insights for listeners looking to implement AI.

Key topics:
- Hear the key questions for responsible AI
- Address common AI trust challenges
- Learn about the future of work with generative AI

Listen to the full episode to hear how you can:
- Align AI with ESG goals
- Understand responsible AI greenwashing
- Create better policies for AI use
- Prepare for the future of work with AI
- Take action to build trust in generative AI

Watch this episode on YouTube:
https://youtu.be/AOyITl4Khvg

Questions or suggestions? Send me a Text Message.

Support the show

***********
Disclaimer: Views are the participants’ own and do not represent those of any participant’s past, present, or future employers. Participation in this event is independent of any potential business relationship (past, present, or future) between the participants or between their employers.


Level up your AI Leadership game with the AI Leadership Handbook:
https://www.aileadershiphandbook.com

More details:
https://www.intelligence-briefing.com
All episodes:
https://www.intelligence-briefing.com/podcast
Get a weekly thought-provoking post in your inbox:
https://www.intelligence-briefing.com/newsletter


Episode Transcript

Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Andreas Welsch (00:00):
Today we'll talk about how leaders can actually trust generative AI, and who better to talk about it than someone who's focusing on just that: trust in AI.
Ramsay Brown.
Hey Ramsay.
Thanks for joining.

Ramsay Brown (00:13):
Thank you so much for having me.
It's an honor to be here.

Andreas Welsch (00:16):
Awesome.
Hey, why don't you tell us a little bit about yourself, who you are and what you do?

Ramsay Brown (00:21):
Absolutely.
Thanks.
I'm Ramsay Brown.
I'm the CEO of Mission Control, an AI safety SaaS company building the AI trust ecosystem.
We view our mission as to accelerate quality, velocity, and trust in the data science lifecycle on generative AI, and that's reflected in our products, our projects,

(00:42):
and our communities.
We focus on how to help teams win with AI trust at scale, and that means being able to deploy the kinds of things that help them move faster while breaking fewer things.
The landscape of AI trust is just about all we think about, and we're really grateful to get to discuss this with you today.
So thanks so much for having me.

Andreas Welsch (01:03):
Awesome.
Thanks for hopping on.
Yeah.
Really, the point of moving faster and breaking fewer things is something that deeply resonates with me.
Especially as I've been looking more into generative AI and trying to make sense of it, what it all means and what it leads to and what the opportunities are.
Ramsay, should we play a little game to kick things off?
What do you say?

Ramsay Brown (01:22):
Let's do it.
Let's do it.

Andreas Welsch (01:24):
Fantastic.
So this game is called In Your Own Words.
And when I hit the buzzer, the wheels will start spinning.
When they stop, you see a sentence, and I'd like you to complete that sentence with the first thing that comes to mind and why, in your own words.
So to make it a little more interesting, you'll only have 60

(01:45):
seconds to do that.
Are you ready?

Ramsay Brown (01:49):
Hit me.

Andreas Welsch (01:50):
Okay.
Perfect.
Then let's get started here.
If AI were a movie genre, what would it be?
60 seconds.
Go.

Ramsay Brown (02:01):
Bildungsroman.
Bildungsroman is a genre of novel that translates really well into film.
That is the coming-of-age story.
And it's so easy to talk about AI as apocalypse, or AI as procedural drama, or AI as utopia.
But none of that cuts it right, because right now you are living

(02:25):
through the puberty of artificial intelligence.
You're living through the awkward growing phase in which we are moving from the laboratory to the living room.
And as we find that capabilities and market penetration are predictably downstream of capital markets' ability to continuously funnel private equity into the synthesis of

(02:47):
machine intelligence, everything that's happening is what happens when we go from systems that are very hard to use, very expensive to build, and very low in capability to systems that are now pervasive.
They are capable, they are affordable, they are frictionless, and now we can start truly building on top of them.
And that process is a coming-of-age story.

(03:10):
You are currently living through the awkward transition period for artificial intelligence.
And to me, there's no better genre than Bildungsroman, which is that story of moving out from your hometown, girl meets boy meets girl, finding oneself, figuring out one's place in society.
That's the genre.

Andreas Welsch (03:29):
Awesome.
Thank you so much.
That's a very unconventional answer.
Like you said, definitely not what I had expected, but great to hear how you framed that.
Again, to your point, I think that's really what we are seeing right now: not just the decades leading up to this point, but now also our reckoning of what we are doing with this and where this is going.

Ramsay Brown (03:49):
The Dartmouth conferences on artificial intelligence were 67 years ago, and here we are now with the fastest time to a hundred million users for ChatGPT.
And anytime anyone said, oh, this is a flash in the pan, we're forgetting that what you're staring at is not a novel parlor

(04:10):
trick.
You're staring at a general-purpose technology at the scale of electrification, for the automation and synthesis of symbolic reasoning tasks.
There's no way that's not gonna be one of the biggest things that's ever happened, period.
And from this perspective, it's that we're now here, as opposed to where we were even 10 or 15 years ago when I was in

(04:31):
my PhD work, to where we're going.
So it is wildly different.
So I really do couch it as: yeah, we are in the transition phase, and something is actually happening now.
And that's why it feels the way it does.
That's why you and I are talking about this on a Tuesday morning.

Andreas Welsch (04:44):
Hey, why don't we jump into the questions?
And again, for those of you in the audience, if you have any questions, please feel free to post them in the chat.
We'll pick them up as we go.
And we have a lot of knowledgeable members joining in the audience as well.
So feel free to get a dialogue going there as well.

Ramsay Brown (05:00):
Yeah, and I see Michael Novak.
I see your comment: it's not electricity, this is fire, man.
There is a Prometheus metaphor here, for sure.
Because the thing that we are doing is a fundamental transformation of, not matter per se, but information, for which so much of matter, capital, energy, and identity is

(05:22):
downstream of our ability to use information effectively.
So yeah, I see that.
If not electricity, fire.
That's not a bad metaphor either.

Andreas Welsch (05:30):
Yeah, exactly.
Maybe a question for you.
You held the first annual Leaders in Responsible AI Summit in partnership with Jesus College in Cambridge last month.
You were able to get together a vast number of experts on the topic of AI ethics and trust and responsible AI.
So I'm curious, as we get into this topic of how can leaders

(05:52):
trust generative AI: what were some of the key topics and recommendations that leaders building AI products now need to know that you discussed in Cambridge with these experts?

Ramsay Brown (06:03):
Yeah, so I'd say first, two points of immense gratitude.
First, to my counterparts Julian and his team at the Intellectual Forum at Jesus College Cambridge, for extending the opportunity to come and host such a special and high-impact day with them.
And the second to the attendees of the summit, without whose brilliance we would have nothing.

(06:24):
It was a very unique opportunity to get a relatively intersectional cross section of some of the world's leading thinkers and practitioners in this space together under the Chatham House Rule to discuss four topics that are at the forefront of our ability to meaningfully trust AI and generative AI, and what's coming next after it.
Now, those four topics were: how our use of AI aligns or fails to

(06:48):
align with ESG goals, what we're thinking about as the world is changing, and what sort of role we are to play in reversing biosphere collapse.
The second was what sort of policies we would recommend to policymakers to help them make better policies faster.
The third was what is and isn't working when it comes to

(07:10):
responsible AI movements and how we trust AI.
And then finally, as the future of work becomes the today of work, especially with generative AI and what's coming down the pipe for agentic AI and synthetic labor: what are we doing to prepare ourselves robustly for a world in which the cost of knowledge work drops to zero in the next 18 months?

(07:31):
These are extremely large questions.
There's no way to not confront the gravitas of the question at hand on each of these.
And the takeaways are actually being summarized right now by my research team.
It was an interactive workshopping day of about 70 people.
And everyone was handed a pad of sticky notes and a Sharpie and

(07:52):
then sat down in groups to collectively answer these questions.
And we're actually going to have an anonymously authored summary paper available shortly, in about the next six weeks, for anyone who's interested in what the findings and takeaways from the summit really were.
But I can touch on at least a few of them now.

(08:13):
To give you a little bit of a snapshot of where we stand.
The first of which is that, based on our current trajectory, it seems like a lot of what's going on in responsible AI runs the risk of becoming something akin to the greenwashing that's happened in ESG.
And for those who aren't familiar with greenwashing:
as a reminder, greenwashing is what happens when a large

(08:35):
corporation creates the image of taking the steps that it would need to take to reduce its ecological impact.
But in fact, it is just an image, and they do not actually meaningfully operationalize any of it; it becomes more of a PR thing than anything.
When we look at what's going on in responsible AI, one of the

(08:56):
big outstanding questions is, are we doing something similar to that, but with AI?
Large organizations are saying, look, here's our principles, here's our practices, we have this framework.
And then you go talk to data science practitioners and you hear the same answer of: my incentive structures haven't changed.
I'm not given tools.
Our ops team doesn't talk to our data science team.
No one talks to compliance, and compliance doesn't talk to us.

(09:18):
We all operate in fiefdoms, and we found that this is an unsustainable way to do this, because nothing's actually getting done.
We do run that risk.
That is still a very real risk right now.
And it's important to note that when we look at the meaningful jobs to be done around responsible AI, it does appear that while there still may be last-mile, edge-case, or long-tail concerns around some

(09:41):
of the fundamentals of AI ethics, it feels like more and more of the conversation is moving towards governance and the practical steps that must be taken within data science teams to transform ethical recommendations into specific, concrete, actionable steps that are actually played out in data science notebooks: not PowerPoint slides, not risk analysis frameworks, but in the

(10:04):
actual live, impacted flow of data science.
And that's something that we spend a lot of time focusing on internally within our organization to produce solutions for.
And we are hearing from more and more teams that that's where the problems really lie.
The second topic was around meta-policy.
One of the big takeaways was that policymakers are predominantly trying to write policy for a world that's

(10:27):
previously existed, as opposed to a world that is going to exist.
And this is a major problem with policymaking writ large, because of the incentive structures of policymakers.
You can't go back to your constituents and say, we did some speculative forecasting for a world that doesn't exist.
And I got some science fiction authors and some data scientists and some people who are good at economic forecasting, and we

(10:50):
think this is the world that's going to exist, so I'm gonna write policy for that.
If you do that, you'll get voted out of office.
It turns out that's exactly what you need to do, though.
The rate of change in our ability to meaningfully harness intelligence and the price point of doing so are going in opposite directions extremely fast.
We have never been able to manipulate thought mechanically

(11:13):
or biologically as effectively as we can today.
That capacity is increasing dramatically and at an accelerating pace, and the cost of doing so is decreasing.
When that happens, almost no existing social or political systems we have in place today are capable of meaningfully organizing that world, because it has never existed before.

(11:35):
This is the reason that politics has such a hard time with black swan events: there's no precedent you can set for this.
Since it didn't happen before, we can't write a law for it.
And this is what we're seeing even right now with the EU AI Act, which is absolutely suffering from Brussels syndrome and is being rewritten and having its goalposts moved, because they're coming to understand that there's no way that you are

(11:57):
going to write a law right now that is going to meaningfully get out ahead of capabilities fast enough for these technologies or their use cases.
So this is, as Marshall McLuhan puts it, the problem of trying to drive a car by only looking in the rear-view mirror, using that as what you're looking at instead of the road.
And politics is by definition a rear-view-mirror kind of process.

(12:18):
So that's been highlighted as one of the concerns we need to address.
In terms of what is and isn't working: what we're coming to find is that there's a gap between the compliance side of the house, which is responsible for how we need to abide by laws, what risks and controls we need to have, and what factors of trust we need to abide by as an organization,

(12:39):
and then the data science side of the house.
Data scientists don't know anything about compliance, and compliance teams don't know anything about data science.
And I'm making a really gross generalization here on purpose.
I realize that this is not a completely fair representation of reality, and there's lots of data scientists who understand compliance and lots of compliance officers who understand data science.
But writ large, the incentive structures that have emerged in

(12:59):
large organizations are such that these two groups of people own distinctly different lines of this practice, each separately responsible and accountable.
Yet they need to be intersectionally merged, such that they can collectively behave and own each other's outcomes better.
And this is not happening.
And this friction between these departments is where a lot of

(13:20):
the problem lies.
Breaking that is not just a technical problem, it's a cultural problem.
And then finally, in terms of the future of work and what this means: this is probably the closest to where generative AI is cutting into the bone right now.
And the consensus isn't clear on what this impact is going to be.
So I'm not going to speak for the whole of the attendees, but

(13:40):
I'll speak for our opinions here within our organization.
The impact of generative AI is going to be a discontinuity from previous ways that we've used AI within the enterprise.
So previously, AI has been used predominantly for tasks of prediction, classification, or control along business unit lines, for which we needed to be able to make a small decision or

(14:01):
automate a small process, and that could have been at any point in the integrated value chain.
The problem now is that the types of things generative AI is capable of doing appear to be more robustly general than previous systems.
So this is not, I need a classifier model for determining whether or not we should restock this product and then make the necessary adjustments to our supply chain, as much as it turns

(14:26):
out that, if I have an adequately powerful model capable of language, some of the latent capabilities in that model may look so far afield from what we actually thought the job to be done for this model was, that suddenly it's capable of doing lots of different types of jobs within the organization relatively

(14:47):
effectively.
And this is on the heels of things like the Goldman Sachs report, or our upcoming Labor Transformation Playbook, in which my organization has scored the 1,016 jobs the Department of Labor recognizes, across the 40,000-odd tasks that those jobs are made of, and how every single one of them is amenable to being disrupted and replaced by

(15:09):
generative AI and agentic AI.
This is the structure of the problem: when we look at the incentive structures that chief financial officers are under, if you tell a CFO that there exists a new batch of technologies that are capable of reducing their operating costs substantially by reducing the demand for labor,

(15:33):
there's not a CFO alive, who understands the games that they've signed up to play, that's not going to make the decision to do a standing reduction in force on the majority of their business units when generative AI and AI are capable of doing those jobs.
Not getting rid of everybody, but looking at standing reductions in force, because you're able to accomplish the same productive output with less human labor involved in that process.

(15:56):
The consequences of that decision, and the consequences of where we're going in terms of the falling price point of the jobs to be done becoming automated, are, I think, going to be one of the largest conversations in the social discourse from here on out.
Because these tools are leaving the phase of, isn't it funny to

(16:17):
see the Pope in Balenciaga, and entering, my team's now using these and we're now on a permanent hiring freeze like every other team.
And that was the nature of the conversation at the summit: what do we do about these types of problems?

Andreas Welsch (16:29):
Perfect.
So how can our audience learn more about it when the paper becomes available?

Ramsay Brown (16:36):
So we have our magazine newsletter, Accelerate, which you can sign up for at takecontrol.ai/accelerate.
And I'll be happy to drop a link into the feed here a little later, where we'll be announcing the release of the paper, and we look forward to any conversations and discourse

(16:56):
around this.
Honestly, we're so grateful to the attendees from the summit who came and joined.
And we really look forward to being able to broadcast more thoroughly what the current state of the art, what the current snapshot of the discourse is around this, because we think that we were able to build a very safe space, to be a little more comfortable to talk about what's really going on.
And I think the story is quite compelling, and we want to get

(17:20):
it out there.

Andreas Welsch (17:22):
Awesome.
Thank you for sharing it.
You didn't promise too much when you said those are some big questions, certainly judging by the answers, right?
We need to have even more discourse and more dialogue about them, and what the impact is and what all of that actually means.
Yeah.
But maybe we can bring it to business and make it more

(17:42):
tangible.
From this 30,000-foot view of what does all this mean, to: what can I do with it, and what can I do today?
And especially when we think of trust, and trusting this technology with all its benefits and limitations, yeah, that we already know and have seen over the last couple of months.
What would you recommend? What can leaders already do today to

(18:04):
trust generative AI in the forms that we are seeing it right now?

Ramsay Brown (18:10):
Yeah.
So we're gonna go through three very specific things that leaders can do to start trusting generative AI better.
And it is an active process.
It's not a passive thing.
The first is your people, the second is your processes, and the third is the technology itself.
For your people: there are two things that you need to be doing with your people to improve generative AI success and trust.

(18:34):
And this shouldn't come as any surprise.
These are culture and training, but there's two particular types of things here.
The first of which is: the teams that are going to succeed with these tools are the teams that are going to succeed in the market, because these are going to confer strategic, competitive advantage for your organization that is going to distinguish you

(18:55):
on capabilities, price point, quality of service, or margin, because you're able to operate more efficiently using them.
That means your organization needs to be constantly reviewing, as an internal team, how are we winning using these tools?
And that comes from the top.
In my organization, every week we have a review internally of: how did you get

(19:18):
your job done this week using generative AI?
These are not a, I better not catch you using it.
It is a, I better not catch you not using it.
This is like, why did we buy nice MacBooks? Why do we use good tools?
Go use the best tools you can to get your job done.
So that's a culture thing that comes from the top.
As a leader, you need to be setting that culture, that this

(19:39):
is the way we do this now.
And the second is, you need to be instilling the culture of how to think about these with that critical theoretical lens of: yes, I know the output said this, but does that even make sense for what I'm trying to accomplish?
Because if we treat these like they're always going to be clairvoyantly perfect tools the first time around, and they'll always create the right outputs,

(19:59):
we're gonna miss the point that they're not.
They're tools for accelerating our work.
They're not yet to the stage where they can be trusted verbatim to provide business-accurate and business-valuable responses.
Now, that doesn't mean we get to get rid of them entirely.
Quite the opposite.
It means, like everything else, you need to apply your critical thinking to what's going on.
So that's the first.
That's your people.
The second is your processes.

(20:20):
You need to have a process review of where, within your business value creation flows, from understanding your market through to customer success and corporate strategy, these tools specifically fit into each of your value chains.
This is one of the things that we think about a lot and we help

(20:41):
teams figure out: when we look at how we do things, where does generative AI even fit?
You need to be analyzing those processes to determine where these tools amplify or augment how your people are already operating.
And then the trust component of this is, you need to have governance policies in place, which are specific, actionable, measurable, recordable, accountable,

(21:02):
documentable steps that teams are taking.
Either as they build these tools or as they use third-party tools, you need to be able to demonstrate in systems of record: here are the steps we are taking to know that we are accountable for mission success using these tools, and we've taken this seriously.
That's the process for trust.
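To make "demonstrate in systems of record" concrete, here's a minimal sketch, in Python, of what one such documentable step could look like: every generative AI use gets appended to an auditable log. The function and field names are illustrative assumptions, not Mission Control's tooling or any specific governance product.

import datetime
import hashlib
import json

def log_genai_use(user, tool, purpose, prompt, path="genai_audit.jsonl"):
    # Append one auditable record per generative AI use to a system of record.
    entry = {
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "user": user,
        "tool": tool,
        "purpose": purpose,
        # Store a hash of the prompt rather than the prompt itself,
        # so the audit log cannot become a second place for secrets to leak.
        "prompt_sha256": hashlib.sha256(prompt.encode()).hexdigest(),
    }
    with open(path, "a") as f:
        f.write(json.dumps(entry) + "\n")

# Example: record that a draft email was produced with ChatGPT.
log_genai_use("a.welsch", "chatgpt", "draft supplier email", "Please draft ...")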
And then for the technologies themselves, you need to be

(21:25):
either developing in-house capabilities around generative AI or using third party.
Third party, the stuff that we have available, the OpenAIs, the Midjourneys, the Stable Diffusions, and everything built on top of them, has a fundamental barrier for trust around data security and data privacy.
That's been our big innovation with our GenOps platform: blocking secrets from getting leaked into

(21:47):
generative AI systems, which has become a huge problem.
You see it in the news now that Amazon blocks ChatGPT, because they found its trade secrets leaking in.
Samsung had a massive breach the other day where people were copying and pasting secrets into ChatGPT from one of its chip fabrication facilities.
And across the private and public sector, we keep hearing the same thing: our options are either to ban this technology, or we keep

(22:10):
leaking secrets.
We've developed a solution that helps people and organizations use ChatGPT without leaking their data into it.
And we advocate for technologies that improve, practically speaking, not just measurements or policies around trust, but the actual act of using these tools.
We live in a time where we don't have to keep using checklists to solve everything.
We can actively intervene in automatic processes to improve

(22:34):
their safety and their scale.
Those technology layers, and the whole emerging generative ops, FMOps, and prompt ops fields, are going to be every business leader's best friend in this process, because they provide the business intelligence tooling and the trust layers that are otherwise missing from

(22:56):
how these tools operate.
So those are the three things: our people, our processes, our technology.
Make sure your team knows what they're doing and is, from the top, incentivized to do it.
Retune your business processes to understand where these fit in and capture value.
And then invest in tooling that creates the trust layers between you and the third-party services that you want to depend on.

(23:17):
Those are the specifics, I'd say.
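To make the secrets-blocking idea concrete, here's a minimal sketch of a pre-filter that redacts likely secrets before a prompt ever reaches a third-party generative AI API. The patterns are illustrative assumptions and far from exhaustive; this sketches the general technique, not Mission Control's GenOps product.

import re

# Illustrative patterns only; a production trust layer would use a far richer set.
SECRET_PATTERNS = [
    re.compile(r"sk-[A-Za-z0-9]{20,}"),       # OpenAI-style API keys
    re.compile(r"AKIA[0-9A-Z]{16}"),          # AWS access key IDs
    re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),  # email addresses
]

def redact(prompt):
    # Replace anything that looks like a secret before the prompt leaves the organization.
    for pattern in SECRET_PATTERNS:
        prompt = pattern.sub("[REDACTED]", prompt)
    return prompt

# Usage: scrub first, then hand the safe prompt to whatever LLM client you use.
safe_prompt = redact("Summarize: key AKIAIOSFODNN7EXAMPLE was issued to ops@example.com")
print(safe_prompt)  # Summarize: key [REDACTED] was issued to [REDACTED]

A real trust layer would pair detection like this with logging, policy checks, and human review rather than regexes alone.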

Andreas Welsch (23:19):
Awesome.
Thanks for sharing, and thanks for sharing it in that level of detail and structure of how you recommend leaders go about this.
Looking at the chat, I also see that this really resonates with our audience, and I saw Maya's comment from earlier, where she compares it to the industrial revolution and this coming dawn

(23:40):
of the technology, and, yeah, us not even knowing exactly where this will end up and what this will power.
I think that comparison Maya made was about steam engines, and where they were used and how they were used to power the economy in a new way.

Ramsay Brown (23:55):
Yeah.
And to her point, this is why the World Economic Forum has its Centre for the Fourth Industrial Revolution.
We are in a completely, qualitatively different time period from where we've been previously that we're stepping into.
When we look at this, like, what does 2023 to 2025 look like?

(24:16):
We have to remember we have active warfare going on around the world, both literal and hybrid.
We have a major US election coming up.
So we were tasked recently by a partner to create some meaningful predictions about that timescale.
And we see the rise of agentic AI.
We see major crises around deepfakes in the next US election.
We see AI being used in hybrid warfare,

(24:38):
embodied humanoid robotics coming down the pipe.
Already we've seen calls for national AI bans, so we've already gotten to check that one off our prediction list with what's going on in the EU.
We see attempts at regulation of GPU control.
We think that the AI-deserves-rights discourse is going to accelerate much faster than the AI Bill of Rights discourse.

(24:58):
And if we look even at the long tail of this, we look at the rise of synthetic labor, and what happens when an LLM and a Python harness strapped to an Ethereum wallet is no longer a tool, but a self-owning agentic system, and it asks to work at your company.
How do you regulate that?
Like, how as a businessperson do you trust an agentic, autonomous, self-owning intelligent system?

(25:21):
That's not a sci-fi question.
That's a 2025 question.
So to Tom's point, we are absolutely in something to the extent of the industrial revolution, but of the mind as opposed to of the muscle.

Andreas Welsch (25:33):
Awesome.
And great to hear also the imminence of it, that it's not decades into the future, but that what we're seeing today is already the first step toward more widespread use and even more capable systems and agents, to your point.

Ramsay Brown (25:51):
And there's only one thing I wanna tack onto that, which is, to the point of the author William Gibson: the future is already here, it's just not very evenly distributed.
Two things are true.
Very large enterprise organizations move very slowly to make decisions about the deployment of new technologies.
That is true.
So if you're someone listening in this stream and you say, yeah, but my company can't move that fast:

(26:13):
you are correct.
And we are at a breakneck pace to build artificial general intelligence, my team included.
Both of these things are simultaneously true.
And as much as people like to think about the term disruption meaning guys in hoodies in San Francisco, practically speaking, the economist Joseph Schumpeter

(26:34):
coined the term creative destruction to point out that capital markets are very good at dismantling slow organizations to funnel their capital more effectively towards faster organizations.
And if we remember that the future is here, it's just not very evenly distributed, this can be a helpful lens for business decision makers in understanding the imperative to start accelerating the adoption of some of these technologies.

Andreas Welsch (26:56):
I think that's an awesome note to end today's show on.
So I would like to thank you for joining and sharing with us what you've seen and what you discussed at the event that you held in Cambridge last month.
How you see that this will impact not only the future of work and what we're living in right now, but also our society

(27:22):
more broadly, and the role that we all play in this as leaders, as experts in that space.
So Ramsay, thank you so much for joining us and for sharing your expertise, and to those in the audience, for learning with us today.

Ramsay Brown (27:34):
Hey, thank you so much for having me.
It was really an honor.