
November 21, 2024 • 27 mins

In this episode of Trading Tomorrow, we explore the groundbreaking ways artificial intelligence is reshaping finance with insights from Fawaz Chaudhry, the Head of Equities for Fulcrum Asset Management. Fawaz provides a rare look at how AI tools are harnessed to interpret complex data, streamline coding, and improve reporting in finance. We delve into the future of AI for pattern recognition in images and video, and Fawaz shares the impact of hardware advances on AI's capabilities in finance. Tune in to discover how AI affects productivity, market efficiency, and the future of portfolio construction.


Episode Transcript

Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Speaker 1 (00:06):
Welcome to Trading Tomorrow: Navigating Trends in Capital Markets, the podcast where we deep dive into the technologies reshaping the world of capital markets.
I'm your host, Jim Jockle, a veteran of the finance industry with a passion for the complexities of financial technologies and market trends.
In each episode, we'll explore the cutting-edge trends, tools and strategies driving today's financial landscapes and paving

(00:29):
the way for the future.
With the finance industry at a pivotal point, influenced by groundbreaking innovations, it's more crucial than ever to understand how these technological advancements interact with market dynamics.
Today, we're discussing artificial intelligence and its

(00:52):
evolving role in finance, particularly within equity markets.
AI is reshaping everything from nowcasting to portfolio management and beyond, but how much of this revolution is transformative and how much is overhyped?
To explore this, we're thrilled to have Fawaz Chaudhry, the head of equities at Fulcrum Asset Management, with us today.
With over $8 billion in assets under management, Fulcrum has

(01:14):
been at the forefront of integrating advanced AI tools into financial analysis, helping to refine investment strategies and even collaborating with central banks.
Fawaz Chaudhry joined Fulcrum in 2017 as head of equities and is responsible for building the firm's equity capability.
A key member of the broader investment team, Fawaz is also

(01:34):
the lead portfolio manager for the thematic and climate solution strategies.
Prior to joining Fulcrum, Fawaz spent over 15 years developing his long-term thematic investment approach, including at Hadron Capital and Moore Capital.
Welcome to the podcast, Fawaz.
Thank you so much for having me.
So, you know, let's just start with your basic pulse on AI tools within

(01:54):
the industry.
What have you noticed in terms of usage?

Speaker 2 (01:58):
Well, I mean, AI tools have been used to create algorithms for trading black boxes, but in a very hesitant way, because you don't actually know what the black box is spitting out.
There are dangers of hallucination, as they call it, and the results have been mixed.
So in that sense, I would say progress is still limited and

(02:18):
halting in general across the industry.
What I have observed more, in terms of gathering and interpreting data, in terms of where AI is best suited, is language.
So, in terms of gathering investor reports, gathering knowledge from transcripts, gathering generally lots of

(02:39):
reading, all of the transcripts that are written in language that can be summarized and put together, I think that's where we've seen a lot of progress now, and it's continuing at this stage.
And at Fulcrum, you're using AI to write a little bit of code.

Speaker 1 (02:56):
Perhaps you can explain that a little bit
further.

Speaker 2 (02:58):
Yeah, I mean, code runs through asset management.
In every industry you write code now, and obviously we do in asset management as well.
We write code for tools for monitoring, for risk, for implementing trades; obviously in systematic trading, but even in discretionary trading you also use risk management tools.
So everyone is well aware of the adoption of large language

(03:22):
models for increasing productivity, for producing a very high degree of code.
We see the data as an investor in AI companies and, because I'm an equity investor, we see that one of the best and earliest adopters of this productivity has been in code writing, and you see it in the GitHub usage, and so, yes, so do we.

(03:44):
In my team I have analysts who are writing code and tools and scripts, and they are more productive than they used to be.
So I think this is general usage across everyone producing software, and that's a trend that will continue.

Speaker 1 (03:58):
So, you know, in terms of your opinion, your experience, whether it's large language models or writing code, you know, if you had to create two very clear buckets at this moment in time, because everything is changing regularly, what would you say AI is most useful for and what is it not useful for?

Speaker 2 (04:16):
I mean, the most useful part is obvious, which is language.
Producing written reports is productive, like I also mentioned gathering and reading transcripts, etc.
But a lower-level piece of work could be, for example, writing

(04:38):
our monthly reports, etc.
So what we used to spend time on, perfecting the language, fitting it into a certain format in the fact sheet, that is still time saved when our first rough draft can then be perfected in minutes, not hours or even a day.

(04:58):
So even we are using it.
But in language in general, the high-level work in language is coding.
You are paying graduates from Stanford two hundred thousand dollars a year right now in Silicon Valley to write code.
So you are actually saying, hey, this language work, which is what coding is.

(05:19):
It's a software language, and that is where you see some of the highest productivity.
But language in general, because this is why they call them large language models, AI is good at identifying patterns and trends and then producing output to match that.
And because we have so much written language, in terms of the entire web of written language, AI is able to do that.

(05:41):
In the future, AI will be doing other pattern recognition.
They're doing that now for image creation and video.
It's not as good yet as language, but it will keep getting there, identifying patterns for video and image, and it will keep going.
Anything where you have to identify patterns and reproduce them, AI is going to be very good at.

Speaker 1 (06:01):
We've all seen AI generating images of dogs with
four legs or five legs.

Speaker 2 (06:06):
So it's getting there.
Yeah, it's not as good as language, but you will not see that, so let me explain.
You do not currently notice how much, for example, journalists are using AI to write an article, from the first draft to the final product.
How they could write an article now, maybe in two days, not a

(06:27):
week, because they don't have to perfect it the same way, because the AI can do it for them.
So you don't see it, because the end product is that good.
The image and video are not there yet, but they will get there.
You won't see how much AI has been used, because the end product is that good.

Speaker 1 (06:48):
Well, you know, previously you've mentioned that AI in macro trading is very overblown.
Perhaps you can elaborate on why you believe its impact has been limited so far in this way.

Speaker 2 (07:01):
Well, macro trading, I mean, the more it's about data, then again, the point is, data can break.
Data in macro trading, by its inherent nature, is more short term, so you are reacting quickly to market events.
And then, whether it's inflation or economic data or

(07:21):
etc., or the NFP report, employment data, you're reacting quickly to it.
So long-term trends, and recognizing the pattern in long-term trends and repeating them, becomes a bit harder when everything is more short-term and more unique.
And then once you get a spurious data point, like a hallucination, the AI model can hallucinate, you could put the

(07:43):
trade in the wrong direction, et cetera.
So it's a bit harder.
One area in macro where I've seen AI now starting to be used is, again, coming back to language: reading the language of, for example, all FOMC members or all ECB members, or all within the ECB.
There are all these different central banks in Spain, Italy,

(08:04):
et cetera.
All of them are putting out transcripts.
No one has time to read every Spanish central bank interview transcript, but the AI can read all of it and see whether they are leaning more hawkish or more dovish.
Again, it comes back to language and how the language is used.
So language is where we can still see progress, and obviously Fed

(08:26):
speeches matter.
So that's where language is important, that's where it can help macro.
Data, less so.

Speaker 1 (08:32):
You know, you make a really interesting point as it relates to, like, Fed speeches.
But you know, I would also argue sometimes it's what's either not said or the way it's said; I've seen that nuance move markets as well.
I mean, how are you?
You know, it's wonderful to have the interpretation of, you know

(08:52):
, all the central banks, but at the same time, how are you interpreting the nuance on the front lines?

Speaker 2 (08:59):
Well, AI is actually decent at it.
So if you ask AI, is it more dovish or more hawkish than the same member's previous 10 speeches?
It can pick up on what he is not saying and hence whether he is leaning more dovish or more hawkish.
As long as it pertains to language, AI is actually decent

(09:22):
at it.
That's what I'm trying to say, and you can see all of the FOMC members are out there making speeches.
Every week you get 12 of these, and so you can actually now, and all the banks are now doing it, Bloomberg is now doing it, everyone is now doing natural language processing using AI and giving you a sense of each of the members' positions,

(09:46):
turning dovish or hawkish, based on their recent speeches compared to their own previous speeches.
So I think, in terms of language, AI is quite advanced.
I would not say that you can pick up some nuance and the AI is missing that nuance.

Speaker 1 (10:02):
Wow.
So, you know, I guess another kind of follow-up question in that regard is, you know, what kind of skills are the teams developing now on the front lines in terms of prompt engineering to be able to extract that nuance from these models?

Speaker 2 (10:22):
Well, take the example I just gave, which in some sense is the most important example: the Fed, and whether the FOMC is leaning dovish or hawkish, because that moves markets.
The prompt engineering required is to prompt it with lean dovish or lean hawkish, and what those words mean; AI can even understand what those words

(10:45):
mean.
So actually it doesn't take too much.
Feed it the speeches and AI will be able to understand what you mean by dovish and hawkish because, again, that's part of language.
So it's actually just about fiddling with it and getting it right.
I would say work is being done and they are expanding it to be

(11:08):
able to get the nuance correctly and put it on some scale, et cetera, and so you can actually get some useful information out of it.
So that's where the prompt engineering comes in: assign a number to it, so the dovishness and hawkishness can have a quantity put on them, and then chart

(11:32):
it over time in a graph and stuff like that.
That's the kind of prompt engineering you're referring to.
That's what I've seen produced by these various banks and Bloomberg and others.
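For readers who want to see what this kind of prompt-driven scoring might look like in practice, here is a minimal Python sketch. The prompt wording, the -1 to +1 scale, and the call_llm() helper are illustrative assumptions, not Fulcrum's, Bloomberg's, or any bank's actual pipeline.

```python
# Minimal sketch: score central-bank speeches as dovish/hawkish and chart
# the scores over time. All names and the prompt are illustrative.
from dataclasses import dataclass
from datetime import date

import matplotlib.pyplot as plt
import pandas as pd

PROMPT_TEMPLATE = (
    "Compared with this speaker's previous speeches, rate the following "
    "speech from -1.0 (strongly dovish) to +1.0 (strongly hawkish). "
    "Reply with a number only.\n\nSpeech:\n{speech}"
)

def call_llm(prompt: str) -> str:
    """Placeholder for whatever LLM endpoint you actually use (hypothetical)."""
    raise NotImplementedError("Wire this up to your own model or API.")

@dataclass
class Speech:
    speaker: str
    given_on: date
    text: str

def score_speech(speech: Speech) -> float:
    """Ask the model for a single hawkishness number and parse it."""
    return float(call_llm(PROMPT_TEMPLATE.format(speech=speech.text)).strip())

def hawkishness_series(speeches: list[Speech]) -> pd.DataFrame:
    """Score every speech and return a tidy frame for charting."""
    rows = [{"speaker": s.speaker, "date": s.given_on, "score": score_speech(s)}
            for s in speeches]
    return pd.DataFrame(rows).sort_values("date")

def plot_hawkishness(df: pd.DataFrame) -> None:
    """Chart each speaker's score over time, as described in the episode."""
    for speaker, grp in df.groupby("speaker"):
        plt.plot(grp["date"], grp["score"], marker="o", label=speaker)
    plt.axhline(0.0, linestyle="--", linewidth=0.5)
    plt.ylabel("hawkish (+) / dovish (-)")
    plt.legend()
    plt.show()
```

The design choice worth noting is that the model is only asked for a number relative to the speaker's own history, which mirrors the "compared to their own previous speeches" comparison described above.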

Speaker 1 (11:42):
And has AI reshaped your views on optimal portfolio
construction?
And, you know, if yes, why?

Speaker 2 (11:48):
I would say optimal portfolio construction theory is
not being challenged by AI.
I think AI can get news into the market quicker, get it priced quicker.
It's a continuation of the pattern that we have seen over the last two decades.

(12:09):
It's not that machine learning wasn't happening before; what was machine learning we call AI now.
I would say, in terms of academic rigor, the academic rigor of portfolio construction has not been challenged.
And the balanced portfolio, the optimal frontier, etc., all of that is not being challenged necessarily by AI.

(12:31):
It's just that AI is just another tool which recognizes patterns and gets that information.
Even, let's say, the FOMC, I'm giving the example of now: if someone starts running a strategy that sells the dollar as the Fed is leaning more dovish, and buys the dollar if it's leaning hawkish.
Or, I mean, if people are running AI, live AI, as the FOMC

(12:53):
member is making a live speech, AI is assigning a quantity to the dovishness and hawkishness live, and over time it is translating into the market quicker.
So all I'm saying is, what it does is it takes more of the information and gets it priced into security prices quicker, makes markets more efficient.

(13:13):
So, to the extent that the academic rigor requires an efficient market hypothesis, it is actually making markets move more toward the academic rigor of the efficient market hypothesis, not the markets being semi-efficient or inefficient.
AI is making markets more efficient.
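For context on the "optimal portfolio construction theory" referenced here, the balanced portfolio and the optimal frontier come from textbook mean-variance analysis. The sketch below computes the standard closed-form minimum-variance and tangency weights; the expected returns and covariance matrix are made-up illustrations, not Fulcrum's inputs or process.

```python
# Textbook mean-variance portfolio weights (illustrative numbers only).
import numpy as np

mu = np.array([0.06, 0.04, 0.08])        # assumed expected annual returns
cov = np.array([[0.04, 0.01, 0.00],      # assumed covariance matrix
                [0.01, 0.02, 0.00],
                [0.00, 0.00, 0.09]])

ones = np.ones(len(mu))
inv = np.linalg.inv(cov)

# Minimum-variance weights: w = C^-1 1 / (1' C^-1 1)
w_minvar = inv @ ones / (ones @ inv @ ones)

# Tangency (maximum-Sharpe) weights at a zero risk-free rate:
# w = C^-1 mu / (1' C^-1 mu)
w_tangency = inv @ mu / (ones @ inv @ mu)

print("min-variance weights:", w_minvar.round(3))
print("tangency weights:    ", w_tangency.round(3))
```

The point of the passage stands either way: AI changes how quickly information reaches prices, not the optimization math itself.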

Speaker 1 (13:33):
And you've said that Fulcrum avoids black box AI systems due to fiduciary responsibilities.
Obviously, you know, EU regulation that's coming out is, you know, very concerned around the black box concepts with AI models.
Perhaps you can explain a bit further how your team has come to this decision and is implementing that.

Speaker 2 (13:53):
Well, I mean, in general, for any systematic strategy that you put forward, you should have a very high degree of understanding of what those models are, how those models are constructed, how they're implemented, how they react to different market conditions.
The more black box it is, the less risk you can assign to

(14:16):
it, and the less of an allocation you can do to it.
And neural networks, by their design, are constructed to be black box.
In effect, it's very difficult to understand how, because of the number of nodes and the number of hidden layers and the way it propagates.
And just as a bit of background, I did my bachelor's and

(14:39):
master's at MIT in computer science, with a master's thesis in neural networks at the MIT AI Lab.
So this is the same thing we were doing even two decades ago, and I understand that you cannot actually determine what the driver behind each of the decisions was.
It's a black box, it spits it out.

(14:59):
There is something it saw in the pattern, and it could have been a spurious correlation that it assigned a lot of weight to, and then it ultimately gave you an answer, which is what we call hallucination, etc.
It is some pattern that it saw that didn't really have causality behind it, and ultimately we basically say it's

(15:20):
hallucinating.
But ultimately you cannot then run a large amount of risk and a large amount of your assets dedicated to such a strategy, which can hallucinate but cannot explain why it made the decision it did.
So even other asset managers are in the same kind of conundrum, and hence they don't.
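To make the black-box point concrete, the sketch below fits a tiny neural network on synthetic data and prints its raw weight matrices, which is essentially all the "explanation" a fitted network offers. The data and the scikit-learn model are stand-in assumptions for illustration, not any firm's trading model.

```python
# Why a fitted neural network is hard to interpret: the only artifacts are
# weight matrices, with no label saying which input drove a given decision.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 10))                       # 10 made-up "signals"
y = (X[:, 0] + 0.5 * X[:, 3] + rng.normal(scale=0.5, size=500) > 0).astype(int)

model = MLPClassifier(hidden_layer_sizes=(32, 32), max_iter=2000, random_state=0)
model.fit(X, y)

for i, w in enumerate(model.coefs_):
    print(f"layer {i} weight matrix shape: {w.shape}")
print("example prediction:", model.predict(X[:1]))
```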

Speaker 1 (15:42):
What is your view on the future of AI in coding and automating complex financial processes?

Speaker 2 (15:48):
I am extremely bullish, and the largest risk in my portfolio is AI-related businesses, and has been for a year and a half and continues to be, despite the July-August pullback in these stocks.
I think the adoption rate is going to surprise everyone to the upside and will continue to be extremely robust, and we see at

(16:10):
the moment the market being supply constrained, with not enough hardware available.
I gave you a very simple example: I mean, do we really need to pay people so much for writing code when it's just language?
Yes, writing code has a high amount of value add, but it's still cheaper to buy a graphics card.
So it's going to be the case where the code will be written

(16:36):
by the hardware, and I think we are at a seminal moment; these kinds of pivots happen in the technology landscape.

Speaker 1 (16:49):
And we're about to put the silicon back in Silicon
Valley, basically.
So, you know, as someone who spent time at MIT and made a decision as it related to technology 20 years ago, your words, not mine, I'm not trying to age you here, but I guess, you know, what would your advice to young people be as they look at their education?
Yes, we might be paying 200K per programmer now, but if these

(17:13):
skills are being automated, perhaps how should younger people be thinking about the evolution of the industry?

Speaker 2 (17:20):
Yeah, one thing, for example, in pattern recognition, is something like radiology.
Even 10 years ago everyone was telling me it's going to die.
And we're now at the point that AI can do radiology and detect stuff that doctors can't, because it sees patterns across them.
Having been trained on millions of x-rays, it can do better.

(17:42):
Anything that AI will threaten, those jobs will be at risk.
And I am saying software writing, coding, is something that will become commoditized or much cheaper in the future, which means software itself will become much cheaper in the future, and more of the value in the tech stack will go towards

(18:04):
hardware, not software.
So software has already eaten everyone's lunch.
Those predictions from 15 years ago have come true for Marc Andreessen, and now it's time to give some back.
Software ate too much; we're getting indigestion from software.

(18:25):
So, I would say, I mentioned MIT.
MIT actually has one department, which they call the Department of Electrical Engineering and Computer Science.
So even my bachelor's and my master's degrees were in electrical engineering and computer science; they say Course 6.
So they think of electrical engineering and computer science

(18:46):
as the same department, which is across a spectrum.
So either you're more in hardware and transistors and the silicon and doing hardware-level coding, or you're more in the software, which is the upper layers, like the base layer versus the operating system layer versus the app layer, etc.
So you keep going upper or you keep going lower toward the base layer.

(19:08):
So we are now in the world where we're going back to the chip.
Now you have to produce the better silicon so we can get more productivity out of it.
The large language models are, in my opinion, getting commoditized pretty quickly.
Meta's model Llama is open source, and others as well, and

(19:28):
the way they make them better is adding more layers, adding more nodes, basically more hardware, all of which requires more hardware to train them and more hardware to run the inference on them.
So everyone is desperate to get their hands on more hardware.
So the better AI model is basically just getting your

(19:49):
hands on better hardware.
So, yeah, get into hardware.
That's my advice to the young kids.

Speaker 1 (19:56):
You make me think, you know, going back, maybe, you know, let's call it eight years, you know, back when NVIDIA came out with CUDA and you started having all these debates around, you know, C++ and, you know, moving over to GPU versus CPU.
It seems now, if AI was where it is today back then, maybe we
It seems now, if AI was whereit is today, back then maybe we

(20:19):
would have seen even more rapid movement and sophistication around GPUs.

Speaker 2 (20:24):
Yeah, I mean, ultimately the market did not invest enough in it.
Okay, we had Google, who decided to develop its own accelerator called the Tensor Processing Unit, the TPU.
So Google did a lot of investment into it, which kept NVIDIA honest.
But AMD fell behind with their competitor ATI.

(20:49):
They bought ATI.
It was NVIDIA and ATI before, and AMD fell behind.
It was more interested in the server market, competing with Intel, which they've been winning on, and let this accelerator market go.
And there were not enough people trying to keep NVIDIA honest.
NVIDIA kept on producing at a certain speed, but if people had appreciated that, hey, you should be investing more into

(21:11):
accelerator technology.
If Microsoft or Amazon were doing more, it would have spurred more innovation in it and we could have had better and cheaper accelerators already, and we would already be at a point where we're not paying software programmers so much, because we would be getting code produced more cheaply already.
So anyway, it's all happened.
It took a breakthrough technology like ChatGPT to

(21:35):
come along to wake people up to the potential, and once you get the breakthrough technology, we're on that path now, no stopping.

Speaker 1 (21:42):
Well, as the CMO of a software company, I'm a little nervous about this conversation.
So, you know, lastly, there's a lot of excitement around AI infrastructure companies as, you know, potential market winners.
What role do you think AI infrastructure will play in the financial sector, and how should investors position themselves in this space?

Speaker 2 (22:01):
I mean, investors should not be fighting the trend, in my opinion, at all.
I think when I look at every software company right now: Salesforce.com, they're coming up with their AI agent, OK, sounds like they're buying some NVIDIA cards.
Or ServiceNow, they've got one too; sounds like they're buying some NVIDIA.

(22:23):
Every software company is now boasting how they have an AI version of their software.
They are all becoming more CapEx intensive.
That's what they're saying; read between the lines, they're all buying hardware.
They are going to be companies that used to be very CapEx light.

(22:44):
They used to hire programmers to create a new feature on their software and charge you more for it.
Now all they're doing is buying more cards.
They're all buying hardware.
When they're offering you AI versions of their software, all they're saying is they're out there trying to get their hands on some hardware.
So ultimately, all of these guys are going to become more

(23:07):
CapEx intensive.
More CapEx-intensive businesses have lower returns, not negative; it is still above their cost of capital.
They are right to do this.
If they don't, someone else will and they will fall behind.
It just means the value is moving towards the hardware guys, the CapEx end.
So guys like NVIDIA, guys like Arista Networks, Broadcom, these

(23:32):
are the guys, like Micron with high-bandwidth memory.
These are the guys who are going to now be taking their piece, and obviously, in international markets, ASML, TSMC.
These are the champions.
They're going out there.
SK Hynix in Korea is a high-bandwidth memory producer.
These guys are now creating.

(23:54):
We have gone through a 30-year hardware down cycle where, basically, because of Moore's law, there was so much productivity gain in hardware that hardware got commoditized.
You got better and better hardware for cheaper and cheaper, and you got used to it.
And we went from 20 memory makers to three, and we went from 10 leading-edge fabs to one leading-edge fab, TSMC.

(24:16):
We went from 20 graphics card makers to one.
So the point is, the hardware industry consolidated because it had to, and now we need more hardware.
They have pricing power and they're coming back.
And yeah, you can now produce a software competitor like that by getting some accelerators and telling it to

(24:37):
write code: write me the Force.com platform.
It will produce billions of lines of code and reproduce it for you.
Wow.

Speaker 1 (24:45):
You know, it's funny.
I think of some of the larger companies that have taken hits from the market, but I never thought about it in that context, that they're becoming too CapEx intensive at this point.
But we've seen that play out with the Facebooks and others over the past couple of months.

(25:05):
That's a really interesting way of looking at it.

Speaker 2 (25:07):
But they are CapEx.
They will become more CapEx intensive, and CapEx-intensive businesses will have lower returns.
Doesn't mean negative; doesn't mean it's less than the cost of capital.
They are still going to create value for the business by doing it, but in essence it's the free cash flow of all the software businesses being transferred to the hardware.

Speaker 1 (25:27):
Wow.
So, sadly, we've made it to the final question of this podcast, and we call it the Trend Drop.
It's like a desert island question: if you could only watch or track one trend in AI and finance, what would it be?

Speaker 2 (25:41):
The CapEx of the mega-cap tech, because they are the ones reselling.
Microsoft is a reseller of AI infrastructure.
It is building data centers, buying nuclear power capacity, buying NVIDIA graphics cards and Hopper systems, and reselling those FLOPS.
AWS is reselling those FLOPS; GCP, Google Cloud.

(26:02):
So the CapEx is increasing.
Clearly someone is buying them, obviously, and they are.
And Microsoft, they came out and said they're capacity constrained, supply constrained; there's infinite demand, everyone wants more of it.
So the CapEx of these mega-cap tech companies, that's the trend I'm watching; it is increasing and will continue to

(26:23):
increase.
That's the revenues of all my AI companies that I own.
That tells me that there's demand, hence they're doing the CapEx.
So that's the one to watch.
And if that rolls over, then something is wrong, because they don't see the demand on the other side and hence they're not doing it.
And that would mean the AI trade, all the AI picks-and-shovels guys, their revenues are going to roll over, and the whole

(26:44):
equity market could roll over from it.

Speaker 1 (26:45):
So it's the one thing to watch.
Very insightful, and I want to thank you so much for your time today.
What a great conversation.
Thank you for having me, I appreciate it.
Thanks so much for listening to today's episode, and if you're enjoying Trading Tomorrow, Navigating Trends in Capital Markets, be sure to like, subscribe and share, and we'll

(27:16):
see you next time.