Episode Transcript
Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Libby Hall (00:12):
Welcome to a special Oyster Stew podcast series presented by CRC Oyster and Morgan Lewis. This week's episode is part three of the series. Make sure you listen to the other episodes, where you'll hear essential insights from industry veterans with decades of hands-on experience in wealth management, trading technology and SEC enforcement.
(00:33):
If you'd like to learn more about how CRC Oyster and Morgan Lewis can help your firm, visit oysterllc.com and morganlewis.com.
Pete McAteer (00:52):
With the evolution of AI and the vast frontier facing the financial services industry, Oyster Consulting and Morgan Lewis have partnered to bring to light some of the critical challenges, threats, risks and opportunities that we're all thinking and talking about. The team of experts joining me today are Carolyn Welshhans, a partner at Morgan Lewis in the Securities Enforcement and Litigation Group, where she advises and defends financial
(01:15):
institutions, companies and their executives in investigations by the SEC and other financial regulators. Dan Garrett, a managing director at Oyster Consulting, with 30 years of wealth management experience working at RIAs, broker-dealers and clearing and custody firms, running operations and technology groups. Jeff Gearhart, also a managing director at Oyster Consulting,
(01:36):
with 35-plus years of capital markets experience in senior leadership roles with institutional broker-dealers.
And me, Pete McAteer, a managing director with 25 years in financial services, leading many of Oyster's management consulting engagements that include firm strategic direction, platform strategy decisions as well as execution, with a focus on readiness and change management.
(01:57):
Thank you for joining us. So, Dan, the first question is coming to you. Have you encountered instances where AI deployments led to unintended consequences? What lessons were learned?
Dan Garrett (02:27):
It's been in the papers, been around, which is somebody using AI and not verifying the information that it provides back. So we all know that AI hallucinates; it makes things up. That's not a bug so much as it's being creative, like human brains. Sometimes it gives you creative things to say or to talk about, which is great when you want it to be creative, not great when you're trying to look for facts and you're displaying what
(02:49):
you're getting out of AI as facts. And so the recommendation that we have is always verify. Trust, but verify anything that these generative AI models provide to you. One of the tips that I like to provide is to just ask the AI model to provide you with its sources, and double-check those sources.
(03:12):
It can provide hyperlinks to sites. You can go there, you can look at them, and that's a great way to just double-check that what it's providing to you is accurate.
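Editor's note: to make Dan's "trust, but verify" tip concrete, here is a minimal Python sketch. The ask_model function is a hypothetical stand-in for whatever generative AI API a firm actually uses; the link check uses the real requests library. A reachable link is not proof a claim is accurate, but a dead or fabricated link is an immediate red flag for hallucination.

```python
# Minimal sketch of a "trust, but verify" check for generative AI output.
import re
import requests

def ask_model(prompt: str) -> str:
    """Hypothetical stand-in for your provider's generative AI API call."""
    raise NotImplementedError("Replace with your vendor's API.")

def verify_sources(answer: str, timeout: float = 10.0) -> dict:
    """Extract every URL the model cited and confirm each one resolves.

    A human still has to read the sources; this only catches links
    the model invented outright."""
    results = {}
    for url in re.findall(r"https?://\S+", answer):
        try:
            results[url] = requests.get(url, timeout=timeout).status_code
        except requests.RequestException as exc:
            results[url] = f"unreachable: {exc}"
    return results

# Usage: ask for the answer AND its sources, then spot-check every link.
# answer = ask_model("Summarize SEC Rule 15c3-5 and cite your sources as URLs.")
# print(verify_sources(answer))
```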
But the other thing I wanted to talk about was very specific to our industry and some stories that I heard from financial advisors. In our space, there's been a plethora of AI agents that are
(03:36):
being used for note-taking. They are being used by financial advisors to listen in on the phone calls that they have with clients, and that helps them with note-taking and helps them summarize the call. They can chat with it afterwards and say, you know, what did we talk about, what were the takeaways from the phone call, and the generative AI will present back
(03:56):
a nice summary of the call. It'll provide a to-do list that you can take, and it provides potential time savings. What you hear a lot of times, and it might be some hype from some of these different providers, is that this can save five to 10 hours of time for financial advisors, in that they don't have to take notes anymore, they
(04:17):
don't need to transcribe notes, they don't need to provide notes to their sales assistant. The sales assistant can go in and talk to the chatbot about the conversation that was had and get the takeaways. So there is a potential opportunity, and I've talked to financial advisors saying, yeah, it's a game changer for them. It's been wonderful.
(04:37):
However, I talked to another group of financial advisors that said, no, it's an absolute nightmare. It's terrible, because right now what they have to do is get the transcript from the AI conversation. They have to review the transcript to make sure that everything that was said, so again, comparing their actual
(04:58):
notes to what the AI is producing, is correct, verifying it, making changes to it and then providing it to compliance to be recorded. So how can you take something that, for one group, is saving them 10 hours a week, and another group where it's costing them time because they're going through and reviewing it?
(05:20):
And what I'll say is it's allabout adoption and the way that
some of these applications areput into place, and so I put it
out there as just a warning thatwhen you think about
implementing generative AI andusing it in different models and
so forth, really think aboutthe consequences to the process
(05:43):
and the flow and therequirements that you're using
it.
Now, some firms' compliancedepartments may not be requiring
that or their financialadvisors to review and make
changes and store those notes,and others may.
We can get into whether theyshould or not, but it's about
implementation and thenunderstanding the consequences
(06:03):
of what happens there.
So those are two that I justwanted to point out.
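Editor's note: here is a minimal Python sketch of the review-before-retention workflow Dan describes, where an AI-generated call summary stays in a pending state until the advisor verifies it against their own notes, and only the approved version goes to compliance. All names and fields are illustrative, not any vendor's actual API.

```python
# Sketch: AI call notes must be human-reviewed before books-and-records retention.
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class CallNote:
    client_id: str
    ai_summary: str                     # what the note-taking agent produced
    advisor_edits: str = ""             # corrections made during human review
    reviewed: bool = False
    approved_at: datetime | None = None

    def approve(self, edited_summary: str) -> None:
        """Advisor confirms (and optionally corrects) the AI output."""
        self.advisor_edits = edited_summary
        self.reviewed = True
        self.approved_at = datetime.now(timezone.utc)

def archive_to_books_and_records(note: CallNote) -> str:
    """Block retention of unreviewed AI output; archive only approved notes."""
    if not note.reviewed:
        raise ValueError("AI summary must be human-reviewed before retention.")
    # In practice this would write to the firm's compliant recordkeeping
    # system (e.g., WORM storage, where Rule 17a-4 applies to the firm).
    return f"archived note for {note.client_id} at {note.approved_at.isoformat()}"

# Usage:
# note = CallNote("C-1001", "Discussed rebalancing and a Roth conversion.")
# note.approve("Discussed rebalancing; client deferred the Roth conversion.")
# print(archive_to_books_and_records(note))
```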
Pete McAteer (06:08):
Hey, Dan, just a quick clarification on that second piece. Do you think it has more to do with the firm's policies and procedures around managing the AI tools?
Dan Garrett:
Absolutely.
Pete McAteer:
Okay. I guess Carolyn might want to weigh in on this when we turn to her as well. I didn't want to step on toes, but it just feels like you could
(06:31):
really create some onerous oversight and review if you didn't trust and didn't have the experience with the tool set.
Dan Garrett (06:40):
Absolutely correct. And we could get into an entire discussion around that, of whether these recorded phone calls are admissible and things that you should be storing in your books and records or not. Some firms are arguing that, no, there isn't a transcription; it's just that the AI has learned about the call and you can talk to it about it.
(07:00):
Yes, you can ask it to create a transcription, but that's only if you ask it to do that, right? So at that point the transcription of the call exists, and then it should be stored, and it should be made sure that it's accurate, right? So there's a lot of gray there and ways to think about it, but it absolutely comes back to policies and procedures, and
(07:23):
thinking these things through before you run out and implement a system means thinking about the policies and procedures that you're going to put in place and really thinking through: is this going to make things better or worse in terms of operational efficiencies?
Pete McAteer (07:37):
Okay. So, Carolyn, I'll turn it over to you for your feedback. This is right up your alley now.
Carolyn Welshhans (08:02):
People just generally have been able to identify efficiencies and positive things that it can provide to a business, including in the financial area. But at the same time, you've got to think about, okay, what are, though, the regulatory requirements? What is this going to mean for our governance? What are the things we've got to think through about how this fits into either what we're already doing, or does it create a new obligation on our end that we didn't have to deal with
(08:24):
before? And that's not to say you shouldn't adopt the AI if it makes sense for your business. It's just that you've got to think all of this through, think through the different regulatory regimes that might apply to your business, and then what do you do as a result?
Pete McAteer (08:42):
Okay, awesome. Thank you, Carolyn. Just a quick question, Jeff, just in case you've thought about this or maybe seen something out there in this space. But high-touch trading desks, they're talking to their firms' clients. Have you seen this rear its head in that space?
Jeff Gearhart (09:02):
In high-touch trading desks, I would say not as much as the algo market-making desks and the model-driven desks. That's really where you see heavy use of AI, heavy use of data, AI to manage that data and make trading decisions. That's where it's really coming into play. The high-touch desk is still a lot of the good old-fashioned
(09:23):
voice communications, providing guidance to the clients and moving on from there.
Pete McAteer:
And I guess those are already recorded lines, and the transcripts would just be additive to the existing policies and procedures, right?
Jeff Gearhart:
Fair. When you think about a user coming into a high-touch desk, they're seeking guidance on bringing a large position into
(09:47):
the market to liquidate it or accumulate it or something of that nature. So they're looking for consultation, and I'm pretty sure they're still going to want to talk to their trading or sales trader, if you will.
Pete McAteer (10:11):
So, Carolyn, what legal repercussions can firms face if AI systems fail or cause harm?
Carolyn Welshhans (10:13):
So, just like Dan before, I'm going to pick one situation to kind of pick on here, and I think the one that people maybe have thought about the most, or the worst-case scenario they've thought about, is AI hallucination when it comes to trading. What does that look like, what are those risks and what could
(10:33):
result? And so I've thought about that in terms of, I think, the closest analogy: algorithmic trading. We've already seen that. I think it's in some ways a very close cousin, and the SEC has brought cases there where there have been allegedly runaway algos, or trading algorithms that didn't perform
(10:56):
the way that they were supposed to and resulted in a flood of orders, for example, going to the market. And in those situations the SEC has brought cases against the broker-dealers involved under Rule 15c3-5 under the Securities Exchange Act of 1934.
It's sometimes referred to as the market access rule, and what
(11:18):
that rule generally requires is that broker-dealers have some pretty specific types of controls in place. They really come down to some financial risk and some regulatory risk controls that are really designed with the intent of preventing what people in the past have referred to as a fat-finger error. You know, somebody enters an order for a million dollars when
(11:41):
they meant a dollar, or a million orders when they meant one, you put in too many zeros, that sort of thing. And these controls are supposed to be in place to make sure that if that erroneous order would exceed, for example, a credit or capital threshold for that specific customer and for the broker-dealer itself, the order gets blocked.
(12:03):
It doesn't ever get placed.
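Editor's note: the following minimal Python sketch illustrates the kind of pre-trade financial risk control Carolyn is describing, blocking an order before it is ever placed if it would breach a per-customer credit threshold or an obvious fat-finger size limit. The thresholds, field names, and limits are illustrative only, not a statement of what Rule 15c3-5 specifically requires.

```python
# Sketch: pre-trade checks that block an erroneous order before placement.
from dataclasses import dataclass

@dataclass
class Order:
    customer_id: str
    symbol: str
    quantity: int
    price: float

    @property
    def notional(self) -> float:
        return self.quantity * self.price

CREDIT_LIMITS = {"CUST-A": 5_000_000.00, "CUST-B": 250_000.00}  # illustrative
MAX_ORDER_QUANTITY = 100_000  # fat-finger guard: reject absurd share counts

def pre_trade_check(order: Order, open_exposure: float) -> tuple[bool, str]:
    """Return (allowed, reason); a blocked order never reaches the market."""
    if order.quantity <= 0 or order.quantity > MAX_ORDER_QUANTITY:
        return False, f"blocked: quantity {order.quantity} outside sane bounds"
    limit = CREDIT_LIMITS.get(order.customer_id)
    if limit is None:
        return False, "blocked: unknown customer, no credit limit on file"
    if open_exposure + order.notional > limit:
        return False, (f"blocked: notional {order.notional:,.2f} would exceed "
                       f"credit limit {limit:,.2f}")
    return True, "accepted"

# Usage: a runaway or hallucinating model emitting a million-share order
# trips the quantity guard, so the order is never placed.
# print(pre_trade_check(Order("CUST-B", "XYZ", 1_000_000, 10.0), 0.0))
```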
So you can see how that's something that might be looked at if there were an algo that hallucinated and then similarly placed a bunch of orders that run contrary to the financial risk model of a broker-dealer or its customers, for example, or something else about their trading. And so again, kind of
(12:27):
like we were talking about a moment before, that's not necessarily a reason to not adopt AI if it makes sense for your trading model and your business. But I think it just means you've got to think about that sort of rule, if it applies to you as a broker-dealer, and have you thought about how algorithmic trading in the past, if you've done it, or even if
(12:49):
you haven't, might now be implicated by AI under this sort of rule? How do you make sure that your automated controls are keeping up with, for example, generative AI that might be changing over time? So are you thinking about how to make sure that you're surveilling for those controls
(13:10):
once you have them in place, and that you're comfortable that you've got that control? So I think that's one kind of very specific example of the legal repercussions that could come about when we're talking about AI and trading and financial firms.
Pete McAteer (13:27):
Terrific. Thank you, Carolyn. Jeff, I'm going to turn to you. Anything else to add there?
Jeff Gearhart (13:33):
I think that that is actually an excellent example. We do a lot of work around the market access rules, and we're well aware that there are a lot of large penalties and fines that can go into place. That's just from the regulatory aspect. Then there's also the trading losses and the true financial losses you're incurring. It's a big deal, and when you're using these models, or AI to
(13:55):
guide the models, things can go haywire pretty quickly. So you've got to have the right controls in place, not just on the credit and capital aspect, but the erroneous order controls and the testing that's involved, which is actually required by the rule to do the certification. There's a lot firms have to do when you have direct market access and you're using trading models and AI to make decisions.
(14:16):
So, excellent point.
Pete McAteer (14:20):
So, Jeff, how can firms proactively identify and mitigate risks associated with AI in trading operations?
Jeff Gearhart (14:28):
Thanks, Pete. I think there's lots of ways to answer this, and I'll give some specific examples, but I think all the core risks have an underlying theme, and that's that industry knowledge and expertise is essential. It's key to managing and mitigating the risks. So, in other words, artificial intelligence, great. Somebody needs to know what it's doing and to evaluate the
(14:52):
results. I think it's going to become a larger problem when you talk about trading, operations and settlement functions. It's not the glamour part of the industry, and that's where we're losing a lot of industry and institutional expertise. So people are retiring or moving on or things of that nature and, to be clear, nobody wants to go into the securities
(15:13):
industry to be an operations professional. They're all looking at the sexy side of trading and model development, things like that. So you need to make sure you have the right people there. Key staff is essential to understand the basics of the process and evaluate the AI results, the trends, the data analysis, things of that nature. So that, first and
(15:34):
foremost, is knowledgeable, well-trained industry professionals.
Second, and I think this is where a lot of companies need to evolve and where I think we're seeing more work, is firms need to have an AI framework that defines the governance and accountability. Simply put, you need to make sure the company knows how AI is being used within the firm, that there's an approval process,
(15:56):
and that people aren't just inserting it in the process and moving forward from there. So those are what I think are my priorities.
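Editor's note: a minimal Python sketch of the governance idea Jeff describes, a firm-level registry so the company knows how AI is being used, with an explicit approval gate before any use case runs in production. The statuses and fields are illustrative.

```python
# Sketch: an AI use-case registry with an approval gate (no "shadow AI").
from dataclasses import dataclass, field
from enum import Enum

class Status(Enum):
    PROPOSED = "proposed"
    APPROVED = "approved"
    REJECTED = "rejected"

@dataclass
class AIUseCase:
    name: str                  # e.g. "advisor call note-taking agent"
    owner: str                 # accountable business owner, not just IT
    vendor: str
    data_touched: list[str] = field(default_factory=list)
    status: Status = Status.PROPOSED

class AIRegistry:
    def __init__(self) -> None:
        self._cases: dict[str, AIUseCase] = {}

    def propose(self, case: AIUseCase) -> None:
        self._cases[case.name] = case

    def approve(self, name: str) -> None:
        self._cases[name].status = Status.APPROVED

    def is_authorized(self, name: str) -> bool:
        """Production systems call this gate before invoking any AI tool."""
        case = self._cases.get(name)
        return case is not None and case.status is Status.APPROVED

# Usage:
# reg = AIRegistry()
# reg.propose(AIUseCase("call note-taker", "Wealth Ops", "VendorX",
#                       ["client PII", "call recordings"]))
# assert not reg.is_authorized("call note-taker")  # blocked until approved
# reg.approve("call note-taker")
# assert reg.is_authorized("call note-taker")
```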
When you get into the specifics, such as model risk, you know, the model could be producing an incorrect output, so you need to have the right level of model validation in
(16:18):
place, stress testing and, honestly, regular retraining, reviewing the results and making sure that they're meeting your expectations. A couple other risks that I think are really key: data quality and integrity. That's been a big deal for me. I've been in this field for over 34 years. Data quality is key, and these models can analyze huge amounts
(16:43):
of data very quickly. But you better have regular, rigorous data cleaning: make sure it's valid, make sure it's accurate, make sure it's not corrupted, those types of things.
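Editor's note: here is a minimal Python sketch of the rigorous data-quality checks Jeff calls for, validating completeness, sanity, and staleness before a tick of market data ever reaches a model. The field names and thresholds are illustrative.

```python
# Sketch: quarantine bad market data before a model can trade on it.
from datetime import datetime, timedelta, timezone

def validate_tick(tick: dict, max_age: timedelta = timedelta(seconds=5)) -> list[str]:
    """Return a list of data-quality problems; an empty list means clean."""
    problems = []
    # Completeness: every field the model depends on must be present.
    for name in ("symbol", "price", "timestamp"):
        if tick.get(name) is None:
            problems.append(f"missing field: {name}")
    # Validity: prices must be positive numbers.
    price = tick.get("price")
    if isinstance(price, (int, float)) and price <= 0:
        problems.append(f"invalid price: {price}")
    # Staleness: a model trading on old data is trading on bad data.
    ts = tick.get("timestamp")
    if isinstance(ts, datetime) and datetime.now(timezone.utc) - ts > max_age:
        problems.append(f"stale tick: {ts.isoformat()}")
    return problems

# Usage: anything that fails gets quarantined rather than fed to the model.
# issues = validate_tick({"symbol": "XYZ", "price": -4.2,
#                         "timestamp": datetime.now(timezone.utc)})
# if issues:
#     print("quarantined:", issues)   # ["invalid price: -4.2"]
```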
And then, when it comes to the use of AI for operational risk, you need to make sure there's transparency, there's audit trails on what it's doing, there's some type of metrics
(17:04):
that you can use to review the results to make sure they're reasonable, and an escalation process.
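Editor's note: a minimal Python sketch of the transparency layer Jeff describes, where every AI-driven decision gets an audit-trail record, a reasonableness metric is checked, and anything out of bounds is escalated to a human. The threshold and record fields are illustrative.

```python
# Sketch: audit trail plus a reasonableness metric with human escalation.
import json
import logging
from datetime import datetime, timezone

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("ai_audit")

REVIEW_THRESHOLD_NOTIONAL = 1_000_000.00  # illustrative escalation trigger

def record_and_escalate(decision: dict) -> bool:
    """Log an audit record for every AI decision; return True if escalated."""
    entry = {
        "ts": datetime.now(timezone.utc).isoformat(),
        "model": decision.get("model"),
        "action": decision.get("action"),
        "notional": float(decision.get("notional", 0.0)),
        "inputs_hash": decision.get("inputs_hash"),  # ties output to its inputs
    }
    log.info("audit %s", json.dumps(entry))          # the audit trail itself
    if entry["notional"] > REVIEW_THRESHOLD_NOTIONAL:
        log.warning("ESCALATION: notional %.2f exceeds review threshold",
                    entry["notional"])
        return True  # route to a human reviewer before anything executes
    return False

# Usage:
# record_and_escalate({"model": "exec-v2", "action": "BUY 50000 XYZ",
#                      "notional": 2_500_000.0, "inputs_hash": "abc123"})
```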
And the last thing I'll state, even though there's probably a bunch of other risks, such as cybersecurity and other things you need to focus on, is change management. We've all worked in large companies, and they get content doing things one way. Well,
(17:25):
AI continues to learn and evolve, so you have to provide training, ongoing management of anything that changes in the process that could affect the models, and involve, you know, not just the technology team but the end users and the people that can actually evaluate the results. So there's a lot, I guess, to answer your question in terms of
(17:46):
how you could mitigate the risk, but those are what I think are the keys for me.
Pete McAteer (17:52):
Yeah, yeah, thanks, Jeff. I agree. Much more to come. We're just getting through the door with this right now.
Pete McAteer (18:03):
Carolyn, anything to add on the trading operations?
Carolyn Welshhans:
I mean, I think what Jeff said was really thoughtful, and for me it helped clarify kind of the thought that AI isn't plug and play. Obviously we've been talking about that. And I also think it's not necessarily correct to think about it as a substitute for a lot of the uses we've just been talking about.
(18:24):
It might be an enhancement, it might make things better. But as Jeff was talking, it was clear, kind of each of the steps he was talking about, you do still need the people, you still need the knowledge, you know, whether it's in terms of oversight, or it's the training of the model, or it's, you know, thinking through what you really want it to be doing and the knowledge you want to be
(18:45):
imparting to it. You know, it's still a partnership with the people who have that knowledge and have those contributions, and I think that that might be a good way to think about it. Again, not a substitute or a plug and play, but an enhancement, if that is in fact what it would be for your business.
Pete McAteer (19:03):
Yeah, the plug-and-play piece, I see, is where it inserts itself in the middle of the analysis and digestion of large amounts of data, to summarize and pull together summary data, summary information that can be leveraged and considered, right? And it has to be considered by a
(19:24):
human before it can be put to use.
Libby Hall (19:28):
Thanks for joining us for this episode of our AI series with Morgan Lewis. We hope this conversation gave you new insights into how AI is shaping the future of financial services. This podcast series was recorded prior to the merger of Oyster Consulting and Compliance Risk Concepts. Be sure to subscribe so you don't miss upcoming episodes as we continue exploring the legal, compliance and operational
(19:51):
impacts of AI. For more information about our experts and our services, visit our website at oysterllc.com.