Episode Transcript
Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Jim Jockle (00:06):
Welcome to Trading Tomorrow: Navigating Trends in Capital Markets, the podcast where we deep dive into technologies reshaping the world of capital markets.
I'm your host, Jim Jockle, a veteran of the finance industry with a passion for the complexities of financial technologies and market trends.
In each episode, we'll explore the cutting-edge trends, tools and strategies driving today's financial landscapes and paving
(00:29):
the way for the future.
With the finance industry at a pivotal point, influenced by groundbreaking innovations, it's more crucial than ever to understand how these technological advancements interact with market dynamics.
To help us navigate this topic, we're joined by Alvaro Cartea, Director
(00:58):
of the Oxford-Man Institute and Professor of Mathematical Finance at Oxford University.
Alvaro works at the intersection of financial economics, mathematics and data science.
His research spans several fields, including algorithmic and high-frequency trading, market microstructure, mathematical finance, asset pricing, commodities markets and financial regulation.
With decades of experience shaping the financial landscape
(01:19):
through cutting-edge quantitative models, Alvaro has been at the forefront of innovations transforming how markets operate.
In today's episode, we'll dive deep into one of the most significant technological shifts facing the financial industry today.
From algorithmic trading to risk management, AI reshapes how firms make decisions, manage portfolios and maintain a
(01:40):
competitive advantage.
We'll explore the implications of this technology for quants, data scientists and financial professionals alike.
Well, first and foremost, I want to thank you so much for joining me today.
Alvaro Cartea (01:51):
Well, I'm
delighted to be here and thank
you for having me.
Jim Jockle (01:54):
So let's jump right in.
How has AI recently changed the landscape of quantitative finance?
Alvaro Cartea (02:00):
Okay, so AI has been around for decades, right, so it's not a new thing.
People tend to think this is something new, but what has actually changed is that computing power, access to data, and the ability to process that data have completely changed the way we operate in markets, right?
So the question now is whether you have the money to buy the
(02:23):
machines, to buy the data, and then to process and act on all of that information.
Jim Jockle (02:30):
And so, what specific AI technologies or methodologies are having the most significant impact?
Alvaro Cartea (02:36):
You know, specifically around quant strategies, so, I'm an academic, but I talk to a lot of people in the industry.
But for sure I think deep learning and reinforcement learning.
So let me say a couple of things about that.
So deep learning. One thing AI tries to do is mimic
(02:56):
how the brain works, how the brain makes decisions.
So you use artificial neural networks, layers that you train and then they do things for you, like making decisions.
But when you have many of these layers, that's what we call deep learning.
That's pretty much, I would say, everywhere.
And then reinforcement learning is perhaps the trickiest and
(03:16):
most important one, depending on how you want to take this conversation. These are learning algorithms that, by trial and error, end up training themselves how to best act, given what the market is doing, what the market has seen in the
(03:36):
past, how the market has reacted to your own actions.
So that's the reinforcement aspect of it: you act, then you receive information, and then you sort of retrain and recalibrate.
So trial and error is an important aspect of RL, which is reinforcement learning.
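That trial-and-error loop can be sketched in a few lines of Python. This is a toy illustration under made-up assumptions: a single-state "market" in which each action (buy, hold, sell) has a fixed expected reward. Real systems have rich state and far more actions, but the act, observe, recalibrate cycle is the same.

```python
import random

# Toy illustration: hypothetical fixed expected rewards per action.
EXPECTED_REWARD = {"buy": 0.4, "hold": 0.0, "sell": -0.2}

def train_agent(episodes=5000, alpha=0.1, epsilon=0.1, seed=0):
    """Epsilon-greedy Q-learning: act, observe a reward, recalibrate beliefs."""
    rng = random.Random(seed)
    q = {a: 0.0 for a in EXPECTED_REWARD}  # the agent's beliefs about each action
    for _ in range(episodes):
        if rng.random() < epsilon:               # trial: explore a random action
            action = rng.choice(list(q))
        else:                                    # exploit current beliefs
            action = max(q, key=q.get)
        # Noisy feedback from the "market" after acting.
        reward = EXPECTED_REWARD[action] + rng.gauss(0.0, 0.3)
        # Error-driven recalibration toward the observed reward.
        q[action] += alpha * (reward - q[action])
    return q

q_values = train_agent()
best_action = max(q_values, key=q_values.get)  # what the algo taught itself to do
```

After enough episodes the agent settles on the action with the best expected reward, having been told nothing about the market except the rewards it observed.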
Jim Jockle (03:49):
So you're almost discussing how machines can learn to mimic human behaviors.
Is that accurate?
Alvaro Cartea (03:57):
Well, let's focus mostly on financial markets.
And, you know, my view is very simple.
If you start looking at history, at how things have been done in the past: you have people in the pit trading, doing things, and then compute power and the internet and all this comes along, and at some point what you end up doing is
(04:18):
programming the computer, coding the computer up, to do what you know how to do best, which is to trade.
And you, you know, code things up so all the tricks you know are implemented, and so on.
So it is not that the computer is mimicking the way you act; you initially train the computer to act in the best way
(04:38):
you think it should be acting.
That's part one of the film.
The second part is where we are now, which is we have moved on to algorithms that learn things, reinforcement learning, that learn things that perhaps you never thought about.
So it could well be that these deep learning algorithms or reinforcement learning algorithms are doing what you train them to do, but also they're so powerful and they
(05:00):
investigate many avenues that you never thought about, many patterns in the data, interconnections between data and markets and so on, and then they might stray away from what you would have thought was your own thinking or your own way of trading.
But that's the beauty of it, that you have all this power to do what you think you should be doing.
But the machines will also try and investigate unexplored
(05:24):
terrain that you hadn't thought about.
Jim Jockle (05:29):
So let's dive in on that a little bit.
Trading in many ways is personal, right?
Traders have their own views of the market, their own experiences, specific strategies for a particular asset class, right?
So in training these models, how specific can it get to one person?
Regardless of how the machines are
(05:52):
going to explore the avenues that perhaps an individual wouldn't, is there a standard by which these models are trained, or can it get that personal, down to an individual?
Alvaro Cartea (06:03):
Okay.
So, individual can mean many things.
It could be one person, a trading desk or a whole firm.
Clearly, if you have your own views, the way you design an algorithm, the way you code it up, the data you feed it to train it, and then the way you deploy it and reassess will
(06:26):
very much define how the algos behave and what they do.
So in that sense, they're very personal.
Say I develop an algo to make markets, for example, but I know that there's something very specific in this particular asset class that I'm looking at.
So I do feed the algo the important things that I think will make this algorithm unique and competitive.
Right, this is the important thing.
(06:47):
So of course, everybody will have an edge.
Like if you look at cars, all cars are different but they're all trying to win a race.
They all will have an edge, and your sort of footprint will be in the way you code it and what information you use to act.
So, clearly, you set the objectives and, by setting the
(07:07):
objectives, what the algo might be able to give you is, again going back to my initial point, that the algos are so powerful that they can try and achieve the objective, which, let's say, is maximize profits, but will try or explore anything that's allowable within the black box that you have trained and coded up to achieve that objective.
(07:28):
So it could be making markets, it could be portfolio decisions, it could be, you know, trading corporate debt, but in the end the objective is very important.
But the algorithm is something that, I normally say, has no moral compass.
Algorithms are not trained to behave or misbehave.
Algorithms are trained to achieve an objective under a set
(07:51):
of rules and under some sort of information you feed them and the information they draw out of the market, and then they run and try and do what you told them to do: make money.
Jim Jockle (08:00):
Now, your research has raised some concerns about machine learning and algos, right?
You know, we've spent a lot of time on this podcast talking about the power, but to what extent, in your research, are the algos potentially colluding in the markets, and how serious is this threat?
(08:20):
Could it manifest in real-world trading scenarios?
Please share your thoughts here.
Alvaro Cartea (08:27):
Okay.
So first of all, this research agenda is pretty much what some people here at the Oxford-Man Institute and I have been working on for a few years, four or five years.
I've spent quite a long part of my career looking at algorithms, but the last four or five on the unintended consequences of learning algorithms.
(08:47):
So let me just split the research we do into two important strands.
Some is very theoretical and some is more empirical.
On the theoretical side, we have asked the question: can algorithms collude, right?
So first of all, we are focusing on unintended consequences that may not be good for the market.
(09:08):
This doesn't mean that all algos will collude, but the first question is: can they collude?
And we show, under a very general set of assumptions, mostly based in game theory, that indeed algos can learn to collude.
But what is collusion?
First of all, collusion is when people coordinate actions to achieve an outcome that might not be a competitive
(09:32):
outcome, and an important thing is that they achieve this by a reward-punishment mechanism.
So if you and I enter into an agreement, but you deviate and misbehave, I'm going to punish you.
And just because I punish you, we sustain the agreement, and that is collusion.
Can algos do this? That is the question.
Well, yes. We show that when algos are trained to, sort of,
(09:54):
maximize profits, and a few algos encounter each other in the market, they're not talking to each other.
They're not trained to talk to each other, but the way in which they learn, the way in which they update their beliefs as to what's going on in the market, the way in which they refine strategies to maximize profits, means that the algos enter into
(10:18):
sort of paths that coordinate them into a collusive outcome, or an outcome which is not competitive.
And let me elaborate a little bit more, because it's a very subtle point.
So, again, those are pieces of code that people wrote, and those pieces of code say: I want to maximize profits.
And these are the set of rules, and within these rules there's another rule that says: well, every time you act, and I see
(10:41):
the outcome of my own actions in the market and everybody else's actions, I reassess and recalibrate the way in which I optimally behave. That alone is where algos meet, and then they can coordinate with each other, because they're reacting to each other's actions, they're reacting to the environment, and that alone is something that can make you go into a path that
(11:04):
will sustain an equilibrium which is not a competitive one.
This is something we didn't have in the past.
Right, because people need to ask themselves: why now and not 10, 15 years ago?
Well, 10, 15, 20 years ago, you would code up a machine and the rules were hardwired: you do this and you see this, you see that and do this, and that's it.
Now the rules have a rule to change behavior, to recalibrate
(11:28):
or tweak parameters in a way that searches for the best possible outcome, and that is where the coordination into collusion might occur.
So that's one piece of research where we show theoretically this can happen.
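A heavily simplified sketch of this setting: two independent Q-learners repeatedly set a price (0 = compete, 1 = price high), and each one reacts only to the previous round. The payoff table, parameters, and two-price action space below are all hypothetical; richer versions of such experiments in this line of research show that runs can drift into sustained non-competitive pricing, though any single run depends on the parameters and the luck of exploration.

```python
import random

# Hypothetical prisoner's-dilemma-style stage game:
# (my price, rival's price) -> my profit. 1 = price high, 0 = compete.
PAYOFFS = {(1, 1): 3.0, (1, 0): 0.0, (0, 1): 4.0, (0, 0): 1.0}

def run_market(rounds=20000, alpha=0.15, gamma=0.95, seed=1):
    rng = random.Random(seed)
    states = [(i, j) for i in (0, 1) for j in (0, 1)]  # last round's (own, rival) prices
    q = [{(s, a): 0.0 for s in states for a in (0, 1)} for _ in range(2)]
    state = [(0, 0), (0, 0)]
    history = []
    for t in range(rounds):
        eps = max(0.01, 1.0 - t / (0.8 * rounds))      # decaying exploration
        acts = []
        for i in range(2):
            if rng.random() < eps:
                acts.append(rng.randrange(2))          # trial and error
            else:                                      # act on current beliefs
                acts.append(max((0, 1), key=lambda a: q[i][(state[i], a)]))
        for i in range(2):
            nxt = (acts[i], acts[1 - i])
            reward = PAYOFFS[nxt]
            best_next = max(q[i][(nxt, a)] for a in (0, 1))
            # Recalibrate after seeing the outcome of both agents' actions.
            q[i][(state[i], acts[i])] += alpha * (reward + gamma * best_next
                                                  - q[i][(state[i], acts[i])])
            state[i] = nxt
        history.append(tuple(acts))
    return history

history = run_market()
# Share of late rounds where both agents priced high (a non-competitive outcome).
late_joint_high = sum(a == b == 1 for a, b in history[-1000:]) / 1000
```

Note that neither agent is told about the other; coordination, when it emerges, comes purely from each one recalibrating against an environment that contains the other's actions.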
We also show an interesting aspect, which is that these machines, again, are designed to maximize profits, let's say.
(11:48):
And then they learn signals, market signals, which are important.
You know, we know when volatility goes up, something happens.
If you think of the electronic markets, like Nasdaq, you say: well, if demand and supply pressure in the electronic book is important, it signals where prices might be going.
So I use that as a signal to make decisions, and everybody
(12:10):
does.
It's kind of a signal that most people in the market will use.
I cannot think of a market player in equity, for example, who does not look at the demand and supply pressure in the electronic books.
But now the algos know that that's an important signal.
So we show that when algos learn that there's a signal that is
(12:32):
important... And why is it important?
Because it signals something about what may be happening.
But it also triggers people's actions, because if you think it's an important signal, it's very likely that your neighbor or your competitor also looks at the signal to make decisions.
And then these algos will learn, or will try, to rig the signal to their advantage.
(12:52):
So algos, although, and this is important, although you do not train them to manipulate markets, they will eventually find out that, well, if I sort of put fake orders in the book, I can alter the demand and supply pressure in a particular way.
So I'm referring to spoofing, and that will generate some
(13:14):
actions out there, and then I can take advantage of that.
So that's another research agenda where, you know, once you think about it, it is not a surprise that these algos that have no moral compass will try and give you the best outcome, even though they have to rig a signal.
Now, a third research agenda that complements all of this is
(13:36):
more empirical.
So what we have seen is that some of these algos, whether deliberate or not, I just don't know, try and signal who they are to everybody else, or to some market participants.
So let me give you an example.
Imagine you, Jim, you're a company trading.
(13:57):
You're sending orders to the book to buy or sell assets, but the volume you choose every single time has a particular ending: the last two digits are a prime number, or the last two digits are, you know, always the same, or you rotate them, but there's a clear combination, so that eventually I might not
(14:18):
know Jim by name, but I know that this is perhaps coming always from the same place, and I do the same, and then we can coordinate actions.
Another way in which signaling seems to be happening is that the volumes you use and the volumes I use are so, so large in the book that whenever someone sees a very large limit
(14:39):
order coming into the market, they say: this is very likely Jim.
They don't know who you are.
But they know that all of those large orders seem to always come from someone that behaves in a particular way, and they do the same.
And then, you know, these algos kind of communicate in that way. And what is the outcome?
(15:00):
Well, the outcome could well be that if you and I know who we are, then we sort of coordinate into not cannibalizing each other, or we sort of snipe retail orders but never touch each other.
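A surveillance-style screen for this kind of volume-ending signature could look like the sketch below. The threshold and the sample volumes are invented for illustration; a real screen would condition on trader IDs, time, and much more.

```python
from collections import Counter

def ending_concentration(volumes):
    """Share of orders accounted for by the single most common two-digit ending."""
    endings = Counter(v % 100 for v in volumes)
    return endings.most_common(1)[0][1] / len(volumes)

def looks_like_signature(volumes, threshold=0.5):
    # With 100 possible endings, a trader with no signature spreads out;
    # heavy concentration on one ending is an anomaly worth a closer look.
    return ending_concentration(volumes) >= threshold

suspicious = [1537, 2437, 90037, 437, 12337, 837]  # every volume ends in 37
ordinary = [1500, 2431, 90012, 455, 12378, 803]    # no common ending
```

The same idea extends to rotating endings or size-based signatures by swapping in a different feature for `v % 100`.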
So this is something we've seen in the data, right, and it's, you know, work in progress, but it's something that, again, if you are the regulator and you have access to
(15:22):
proprietary data where you can see all these things, would be an easy way in which you can start thinking about: okay, are algos communicating in different ways?
And a lot of this could be inadvertent.
At no point am I saying that anyone deliberately does these things.
It may well be that some do, I just don't know; today, or when the podcast comes out in a few days' or a week's time, many algos, in a very subtle way, will
(15:44):
be doing this without being trained to at all.
But the algos are so powerful that they will try, with no moral compass, to give you the best deal, to maximize profits,
(16:07):
and find ways, which are not competitive, to achieve that outcome.
Jim Jockle (16:14):
You know, you used the word black box, and it makes me wonder, and I do want to get on to the regulatory side of this in a moment, but how are you beginning to understand this?
What transparency are you seeing within the algos to get a better understanding of the decision-making where you could potentially see the collusion?
(16:35):
What kind of data are you looking at in terms of understanding the maturity of the learning process?
Alvaro Cartea (16:43):
Okay.
So, first of all, very few people would have the data to make statements like the ones I've made; you would have to be the regulator of the exchange to see trader IDs to start isolating some of it.
Some of it you can eyeball even without trader IDs.
You can start seeing some weird patterns in the way volumes are
(17:05):
quoted.
So that's one aspect of it.
So, if you have data with trader IDs, you may be able to explore a little bit more, and it's more transparent in that way for someone who has visibility of the data.
Then the other question, which is on the periphery of this, is: okay, you don't have the data, but what things make these
(17:25):
black boxes more prone to colluding or not colluding?
What makes these black boxes, you know, more likely to misbehave?
Right? And for us, misbehavior is to inadvertently do things that should not happen in a competitive market.
Well, if the number of players in the market is very low, right,
(17:45):
if you have few market players or few black boxes trading, then it's more likely that you coordinate.
If you and I are the main guys trading a particular asset class, or a particular asset within an asset class, and we have these nice black boxes that have been very well oiled and the code is
(18:06):
running well, it is more likely that you and I end up colluding than if we have five or ten people like ourselves.
So the number of players is a key determinant that would allow these black boxes to collude.
There's a more subtle one, that I don't know if we have time to get into, which is how you train these black boxes.
(18:27):
So I've said this before.
An important aspect that people seem to forget is that all of these algos are trained.
You use data to train them.
So I use historical data, you use historical data for yours, and then, once you understand how to set the ball rolling... So these are algorithms that all need initial conditions to start trading.
(18:47):
You will always pick, so Jim will pick, the ones that in the backtest in the lab gave the best performance, and in my lab and my sandbox I pick the ones that gave me the best performance.
But it is very likely that you and I choose starting conditions, or initial conditions, that, more often than not, will
(19:08):
coordinate us into collusion.
So, even if you don't have transparency of what's going on, or data, you know, by the way these algorithms have been coded up and the way they have been trained before deployment, you can end up in collusive outcomes, or outcomes which are not competitive.
An important, subtle point that, depending on your background,
(19:30):
is interesting: you may have outcomes which are not competitive, okay, so prices are much higher than they should have been, but they are not sustained by a reward-punishment mechanism, and that would be a more difficult one for the regulator to attack.
It's like: oh, you know, this is an outcome where we ended up quoting very wide spreads in the market,
(19:50):
but we're not punishing each other if we deviate.
It just happened that it was an equilibrium that the machines were happy not to deviate from or undercut.
Jim Jockle (19:59):
Now, you've been in discussions with the UK Financial Conduct Authority and the US SEC.
What role do regulators have in addressing these types of risks?
And I think a better question is: are they prepared?
Alvaro Cartea (20:12):
Well, so they're fully aware, clearly.
An important point to make is that the regulator is also fully aware of all the good things that AI brings to the table, right?
So, you know, again, in this podcast we're concentrating on the unintended consequences we don't like, but everybody's aware of all of the good things that may come with AI, and they're happy with
(20:36):
that.
So one of the problems they face is you don't want to sort of stop the benefits from this by looking for the unintended consequences.
So how do they attack this?
Well, they are aware, they are trying to understand the problem.
They talk to people like myself and they are trying their best,
(20:57):
given the resources they have.
You know, we need to bear in mind that the manpower the regulator has is, you know, not the same as the financial industry combined.
So it's a fight where the resources are not there.
But the point about collusion or manipulation has always been on
(21:18):
their agenda.
So they're looking into this and finding a way in which you can engage industry to look at this in a very constructive way.
Those are the conversations I've had with them and, you know, I think the industry is quite happy to engage because, as I said, many of these collusive outcomes are inadvertent.
(21:41):
So nobody wants to be in court being told: you have an algo that colluded. And you'll say: well, I never trained it to do that.
But then the responsibility lies where? In the end, you have to be made aware that your algos can inadvertently manipulate the market or collude.
So the responsibility will have to fall somewhere, and that's
(22:01):
for the regulator to decide.
Jim Jockle (22:03):
I want to ask you a question that would be very, very unpopular with my colleagues, because I'm surrounded by them, but I'm going to ask anyway: can AI fully replace human quants, or how do you see the two working together?
As I said, that question wouldn't go down well just past my office, and I'm trying to
(22:29):
have them not read my lips.
Alvaro Cartea (22:31):
Well, you know, that's a great question, right?
Because, you know, as an academic, you also say: well, yeah, they're going to replace us.
So my take on this is: well, actually, no. If anything, it's a tool that's going to make us better.
Maybe the number of people you need to make decisions shrinks; it could well be. But I was at a conference last week here in
(22:52):
Oxford, organized by the Man Group, and one of the top quants, Richard Barclay, and I'm sure he wouldn't mind me saying his name, gave a very interesting talk where he says: okay, look at the following. When pilots used to fly planes, I don't know, 20, 30, 40 years ago, you know, they were really
(23:13):
doing stuff for a good number of hours on a transatlantic flight.
As time has evolved, the time pilots spend has become very, very small, smaller and smaller, but you still need the pilot.
And then the point is: okay, we will not be replaced.
Our role, as a proportion of time you spend on this, will be smaller and smaller and smaller.
However, it will be crucial, especially crucial at times of
(23:36):
duress, okay? And you will never be able to encode everything in an algorithm.
If something goes dramatically wrong, someone has to intervene, and I think the human intervention aspect is quite an important one, and that's the point that was being made by Richard Barclay: when shall we intervene?
(23:57):
If you can code it up, it was already coded in the algo. So it could well be that we have fewer people spending fewer hours, but, one, people have to write the code, right?
Or unless you expect the robots to also write the code, which would be kind of an interesting thing to see how it plays out, but someone has to be supervising in case it goes completely mad.
(24:20):
So, yeah, some people might be out of a job, but I think it will be a natural and gradual thing as people retire.
You know, imagine back in the day when you didn't have Excel to do accountancy or do the numbers, right? Now, you know.
So the answer is: well, I think there's space for us.
Jim Jockle (24:36):
My colleagues are going to be very happy to hear that.
So, let's look to the future. What kind of skills should quants, or aspiring quants and financial professionals, have around AI, you know, to be successful today, or even to enter the industry as they're emerging
(24:58):
from academics?
Alvaro Cartea (24:59):
Yeah, I think it's a wonderful question, because, you know, I'm always in favor of helping people who are making decisions, especially at a very young age.
So when I was trained as a financial mathematician, let's say, the beauty of what you were doing relied mostly on stochastic calculus, stochastic analysis, partial differential equations.
So I'm talking 25 years ago, and the elegance of producing
(25:24):
equations that would have closed-form solutions; well, the problems that you could solve in closed form.
So you had an equation and used it very well.
That was wonderful, and that's how the industry was run many years ago.
As time has evolved, we have moved away quite a bit from that elegance.
We have moved away from some of those techniques, more into data
(25:45):
science: getting your hands dirty with the data, rolling your sleeves up and learning more about statistics, learning more about neural networks and machine learning.
So the skill set has changed.
I don't think the brain power that you need is different; you just need to train it differently.
And you can see it in all the programs in mathematical finance
(26:07):
there in New York, in Chicago, here in Oxford, in London: there's been a great shift in the emphasis of what's done, and statistics has become kind of a more dominant aspect in all of these courses.
Now, what is the trade-off?
Before, we had elegance and interpretability.
So before, when you wrote all these equations down, you
(26:30):
had closed-form solutions.
You knew how to interpret results.
The trade-off is that now you might be doing things, but you can't really interpret very well what the black box is doing, and then the challenge will be: okay,
(26:53):
now you've learned all these statistics and all of this machine learning and all of this AI; can you tell me why things are happening?
That's a question that we'll face, that we could answer in our time, because the equations would speak to you, and now, with the black boxes and so many variables and these deep
(27:15):
learning multi-layers and so on, it will be much more difficult.
So that's gradually changing, and I think the difficulty is on us to retrain and relearn, or learn things from scratch that we never thought were as important.
Jim Jockle (27:32):
But you talk about the elegance, and I think of, internally, our own roadmap.
We're always making enhancements to local stochastic volatility models or Bergomi models.
There's always an extension required from the market as it relates to that elegance.
But is the elegance at risk of being lost in a shifting
(27:55):
syllabus?
Alvaro Cartea (27:56):
No, I don't think so.
I think anybody who's going to be learning about this will be taught the Black-Scholes equation and how beautiful it is, the hedging argument and how it connects to the heat equation and the closed-form solution, because that gives you a stepping stone to understand how things work.
It's like a map. When you have equations, it's like having a map that can take you from A to B, and you
(28:18):
know why you got there, or if you took longer or shorter, or if you deviated, you know what's going on.
When you start extending these models and plugging in kind of more powerful ways of learning what the market is doing, and how to extend the model in ways that you need to in order to remain competitive in the market, and you go into more of the machine
(28:41):
learning aspects of them, then you lose that sort of elegance; not elegance exactly, but the roadmap is not as clear.
You know, you go from A to B, but it's very difficult to say, well, it's because I took a shortcut or because this thing helped me up the hill.
Some things will be much more difficult, or you will not know
(29:02):
how your results would change if something slightly different happened that you hadn't taken into account, whereas before you could. But of course, before you were in a straitjacket; before, you had a map that put you in a straitjacket, and it was very difficult to stray away, and that's why it was easy to say: well, I know what's going on.
The moment you can meander around the world in
(29:26):
different ways, then it becomes a lot more difficult to pinpoint exactly what's causing what, or what is connected to what.
Jim Jockle (29:33):
A final question I have for you: we've talked specifically around trading and around risk. From a systemic risk perspective, what role do you see AI playing in terms of navigating future market crises or unexpected black swan events
(29:53):
or things of that nature?
Alvaro Cartea (29:56):
That's a great question, and I get a lot of people asking me that. So clearly there's a lot of effort being put into asking: is the market about to kind of break down?
Is the market going into disarray?
Can I use all of this data and compute power to at least know that I'm entering into something like that? And people are
(30:18):
working on that, and hopefully that will allow us to soften these crises a little bit; to soften, a little bit, the shocks that are coming and the way they sort of rip through the market.
So that's the positive and optimistic aspect of it.
But if we go back to my initial point, that some of these boxes
(30:41):
learn how things are interconnected, you may also think that these crashes or weird things, whether they're short-lived or long-lived, it doesn't matter, may also be caused by some of these black boxes.
So it may well be that we end up seeing black swans, pink swans and blue swans, some of them sort of painted by boxes
(31:04):
that found a way to sort of shift the market, or give it a punch, in a way that the no-moral-compass black box thinks is the best way to achieve the objective you set for the box.
Jim Jockle (31:19):
Well, this has definitely been a very colorful conversation.
So sadly, we've reached the final question of the podcast.
We call it the trend drop.
It's like a desert island question.
So, if you could only watch or track one trend in AI and capital markets, what would it be?
Alvaro Cartea (31:39):
I would love to see how these deep learning and reinforcement models impact the market, and also language models.
Will the markets become more efficient, or less efficient?
And, I don't think we're going to... you know, within five or ten years' time, we should know how the film
(32:01):
unraveled.
Jim Jockle (32:02):
Well, I want to thank you so much for your time. Fascinating conversation.
Thank you so much.
Alvaro Cartea (32:08):
Well, great talking to you, Jim.
I look forward to talking to you again in the future.
Jim Jockle (32:11):
Absolutely.
Thanks so much for listening to today's episode, and if you're enjoying Trading Tomorrow: Navigating Trends in Capital
(32:45):
Markets, be sure to like, subscribe and share, and we'll see you next time.