Episode Transcript
Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Speaker 1 (00:02):
Bloomberg Audio Studios, Podcasts, Radio News.
Speaker 2 (00:18):
Hello and welcome to another episode of the Odd Lots Podcast.
Speaker 3 (00:21):
I'm Joe Weisenthal and I'm Tracy Alloway.
Speaker 2 (00:23):
Tracy, I've always had this idea for the podcast, or a thing that I've wanted to do, conceptually, with podcasts, which is schedule every guest for two interviews. So you have the opening interview and you ask a bunch of questions, and then it's, oh God, I really wish I had followed up on that. I had more. I was just starting to sort of get my head around this thing.
Speaker 1 (00:43):
Now.
Speaker 2 (00:43):
I could have asked the good questions, and then, like, have the person come back next week. Also, the audience complains, I wish you'd asked that. And then fill in all those gaps that had been inspired by the previous conversation.
Speaker 3 (00:54):
I don't think it's a bad idea. I think it
would double the number of episodes that we put out.
But sure, there are topics that come up, usually things that were just kind of new to us and that we're trying to learn about, specifically technical things, and one of those has to
Speaker 2 (01:09):
Be AI, right? AI. And also, you know, I really had a great time. I guess last month we were in Chicago. Yeah, we talked to a bunch of different people. It was like a trading-related trip. We interviewed Don Wilson, we interviewed the head of the CME. We had some other chats. They were all about the world of trading. When it comes to trading, you know, we talked to long term investors, portfolio managers, and so on. We
(01:31):
talked to some people in the hedge fund space who maybe have a holding period of several weeks or whatever. I actually really want to learn more about the trading done by these people who have a holding time of one second or something like that, because that's where a lot of the tech and a lot of the actual action is. And how that world makes money and how they actually deploy technology is very interesting, but still something I don't have my handle
Speaker 3 (01:52):
On, well, the practical application, right, and also the culture of AI on Wall Street. I find that really interesting because I remember, I guess it was more than a decade ago, but remember Lloyd Blankfein saying that Goldman Sachs is a technology company. Yeah, and all these bank CEOs saying we're going to install ping-pong tables to get all the coders. And now I see ads at trading
(02:12):
firms and it's like, we have a data center full of B200s, or we have a data center full of GB300s. Come work for us.
Speaker 2 (02:19):
The only thing besides all their tech that I know is, like, every time you read a profile of any trading company, they love to play backgammon, and in all the articles the chessboards are out, they could be seen playing chess over lunch, et cetera.
I get it, okay, they like chess, they like games, they like whatever. Let's move the ball forward.
Speaker 3 (02:38):
Well, there's also the underlying theme of, is this all hype, right? Because you do get the sense sometimes that companies are
putting out press releases where they just mention AI to
tick a box, to be seen to be doing something
and hope that their stock actually goes up. And because
so much of this is proprietary and people kind of
have an excuse not to go into detail about it,
(03:01):
sometimes you do get the feeling that people are just
talking about it and not actually using it.
Speaker 2 (03:06):
Cynics and I'm not saying.
Speaker 3 (03:09):
This myself, I know you're not a cynic.
Speaker 2 (03:11):
Speaking of trading and technology, cynics would say that CME's deal with Google, to put trading on the cloud, was hype, that that was a press release. People have said that, people have made that charge, and they don't understand why. You don't have to comment. You don't have to say anything further on that.
Speaker 3 (03:27):
I do have a comment, but I'll hold it for our guest.
Speaker 2 (03:30):
I'm just saying, there is this world where people do press releases and cynics go, I don't really understand the point. Anyway, that's a very long wind-up. Let's learn more about the world of trading. Let's learn more about AI and tech. Specifically, what does it even mean to apply AI within the realm of trading? We're going to be speaking with Iain Dunning. He is the head of AI at Hudson River Trading. He was previously at DeepMind, so his
(03:52):
trading and AI bona fides are about as good
Speaker 4 (03:55):
As it gets.
Speaker 3 (03:56):
You've established them, we've established that.
Speaker 2 (03:58):
Really the perfect guest to answer all our questions. So thank you so much for coming on the podcast.
Speaker 4 (04:03):
Yeah, I'm really happy to be here.
Speaker 5 (04:04):
I agree with you that the mystique factor is kind of overblown, even if it's understandable why people embrace it.
Speaker 2 (04:11):
Sometimes. We're gonna blow past the mystique. Let's start with some really just rudimentary questions. The first one is: Hudson River Trading, as a company, how does it make money?
Speaker 5 (04:21):
Yeah, so we are a sort of quantitative, automated, proprietary trading firm, which is a lot of words. But I guess the way I see it is we are a service provider to markets. Okay, the clearest example is market making. There is a sort of utility to the world of being able to just buy or sell any product, anytime, anywhere,
(04:41):
and for us that means stocks, futures, options, crypto, bonds. And if you could, say, build a magical machine to quote a price to buy or sell any instrument, you would want to quote the best possible price, like the tightest price.
Speaker 4 (04:56):
People would trade with you.
Speaker 5 (04:58):
They would be happy because there's a counterparty for their trade, and they get a kind of good price, like a low spread.
Speaker 4 (05:03):
And we're happy because.
Speaker 5 (05:04):
We essentially pick up a penny in front of a steamroller. We are making sort of money from that spread, and we can pick up the pennies in front of the steamroller if we have a really magical device which tells us what everything should be worth.
Speaker 2 (05:16):
When the steamroller is coming.
Speaker 4 (05:17):
Yeah, it tells us.
Speaker 5 (05:17):
When the steamroller is coming. And so I think that's kind of the very, very sophisticated sort of middleman, in some sense. And the same with Amazon: Amazon doesn't make stuff, but it's a very valuable, profitable company that provides a service, and people get value out of it. Same thing: we're moving stocks and bonds through time and space between different counterparties.
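To make the market-making idea concrete, here is a minimal sketch in Python. It is purely illustrative: the prices, the half-spread, and the make_quotes helper are invented for this example, not anything HRT actually runs.

```python
# Minimal sketch of the market-making idea described above: quote a bid and an
# ask around an estimate of fair value, and earn the spread when both sides
# trade. All numbers and names here are illustrative assumptions.

def make_quotes(fair_value: float, half_spread: float) -> tuple[float, float]:
    """Quote symmetrically around the model's fair-value estimate."""
    return fair_value - half_spread, fair_value + half_spread

fair = 100.00          # the "magical machine's" estimate of what the stock is worth
bid, ask = make_quotes(fair, half_spread=0.01)

# A seller hits our bid, a buyer lifts our ask: we buy at 99.99, sell at 100.01
# and capture the 2-cent spread -- the "penny in front of the steamroller".
pnl_per_round_trip = ask - bid
print(f"bid={bid:.2f} ask={ask:.2f} pnl per round trip={pnl_per_round_trip:.2f}")
```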
Speaker 2 (05:34):
And yeah, we will.
Speaker 3 (05:36):
Ask you about the steamroller in a few minutes. But
before we do that, how does AI or the way
you're using AI actually differ from the algorithmic or quant trading of old? Because I guess one of the questions is, is this, you know, a sort of evolutionary change, maybe a marginal improvement on what already exists,
(05:56):
or is this something seismic and a step change, a big shift in the way trading actually works?
Speaker 5 (06:01):
Yeah, I mean, I don't want to overstate ourselves in some sense, because in this space, as you mentioned before, it's very opaque what different firms of this class are doing. I can certainly speak to our own experience, which is we've been doing this type of trading for twenty plus years, and much like everyone who was doing this, the way it kind of worked was you handcraft features.
(06:22):
It's sort of based on human intuition.
Speaker 4 (06:24):
Oh, I don't know.
Speaker 5 (06:25):
If the order book looks imbalanced, there are more people wanting to buy than sell, then the price is going to go up soon, or something like that. And maybe you get a bunch of very smart people and they think very hard. It's almost like making a very fancy watch: you kind of artisanally craft all these pieces, and then maybe you use relatively simple mathematical techniques like linear regression to combine those predictors. And I've been going to conferences
(06:46):
and things and recruiting for a long time, and even today, if you go on the internet, you'll see people say things like, oh, that's all you can do in finance. For some reason, they'll say this. They'll say something like, oh, it's too noisy, or markets are too nonstationary, or things like this, and so that's all you can do. And I guess that belief isn't really backed up by anything, in my opinion, or by lived experience, I guess. And so we sort
(07:09):
of viewed it more, for a long time, as: here's everything that's happening in the world, and ideally you would put this into kind of like a machine that does not have any human biases.
Speaker 4 (07:19):
I don't know how to.
Speaker 5 (07:19):
Trade stocks myself. I buy broad market ETFs, what do I know? But if you could put all the data into a box and it could kind of churn through all that data, it would find things that you would never find with this handcrafted thing. And we started doing that very early, relatively, in the twenty thirteen, twenty fourteen period, and over time, over the last
(07:41):
decade or so, much like in other contexts that are not finance, there has been sort of a hockey stick, and you can measure it by the sizes of the models, the compute deployed. And over time, that way of modeling the markets, which initially was like a hybrid with the traditional way,
Speaker 4 (07:57):
basically kind of just overtook it
Speaker 5 (07:59):
Entirely. And so now our trading is entirely driven by this magical machine that consumes all the data. I keep saying "this magical machine that consumes all the data" for a reason, which is that this is how ChatGPT is trained: it consumes all the data on the Internet, kind of scraped and collected into one place. When
(08:20):
you train a model, it kind of takes it all and something emergent comes from it. I know that's a bit leading, but that's why I'm talking about it in that sense. And I think that is materially different from the, I'm using my intuition of the markets to construct a predictive model, approach.
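As a concrete illustration of the handcrafted-features-plus-linear-regression era Dunning describes, here is a toy sketch on synthetic data. The order book imbalance formula is a standard one, but every number here is made up for illustration.

```python
# A toy version of the "old school" approach: handcraft a feature (order book
# imbalance) and combine such features with a linear regression. Synthetic
# data only; purely illustrative.

import numpy as np

rng = np.random.default_rng(0)

# Handcrafted feature: imbalance = (bid size - ask size) / (bid size + ask size).
bid_size = rng.integers(1, 1000, size=5000).astype(float)
ask_size = rng.integers(1, 1000, size=5000).astype(float)
imbalance = (bid_size - ask_size) / (bid_size + ask_size)

# Pretend future returns are weakly driven by imbalance plus a lot of noise.
future_return = 0.0001 * imbalance + rng.normal(0, 0.001, size=5000)

# Combine predictors with ordinary least squares (here just one predictor).
X = np.column_stack([np.ones_like(imbalance), imbalance])
beta, *_ = np.linalg.lstsq(X, future_return, rcond=None)
print(f"fitted intercept={beta[0]:.6f}, imbalance coefficient={beta[1]:.6f}")
```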
Speaker 3 (08:31):
So just to be clear, how much of the usefulness of
AI here is about execution and the fact that you
can crunch a lot of data really quickly with hundreds
or thousands of GPUs versus spotting sophisticated patterns or discrepancies
that you can exploit.
Speaker 4 (08:48):
I think it's both.
Speaker 5 (08:49):
I think one of the things that people sort of missed with the whole, just do a linear regression, type thing is how much data financial markets generate when you really think about it. And when I say data, I think it's important to think of it as every event that happens in the market. It's not the sort of time series of prices, but the actual low-level substrate: people quoting, trading, retracting quotes. That low
(09:11):
level stuff is internet-scale data set sizes. And one of the sort of Bitter Lesson type things of AI was, you know, you shouldn't think too hard about how to feature engineer this and preprocess it; you should kind of throw it all into a form of computation that can make use of internet scale data. In the twenty tens, it was like computer vision people used to make detectors for edges of images
(09:34):
and things, and they would combine them. Same thing: that was a good approach, but, you know, it was completely dominated by the idea of getting a very large number of GPUs and a kind of pretty generic neural network form and powering through it. As for how it is finding things that other methods could not, it's very hard to say. Our models are not very interpretable. And I think that's fine because, as Joe mentioned, our
(09:57):
sort of trading style and holding times, a bit of it is like minutes, hours, maybe low single digit days for the most part. And I guess in my mind it's unreasonable to expect them to be interpretable, because, I don't know, if I looked at the order book data for Tesla or something, am I really going to be able to tell you, better than random, what the
(10:17):
price of Tesla will be in a minute's time? And so I kind of think of it like that: if you have something that's clearly superhuman already, what level of interpretability can you expect? This is very different, right, to normal AI.
Speaker 2 (10:28):
Right, this gets into some areas that I'm very interested in. But just to establish what we're talking about: you're trading a stock, Tesla, Nvidia, et cetera, with your magic machine. We had another episode where that was the money box.
Speaker 3 (10:44):
That was a magic box. Different, that's a different one.
Speaker 2 (10:46):
With this AI machine, it is sort of, arguably, grown, right? It's sort of grown in a lab more than it is programmed, much like a chatbot. I know it's very different technology. Like, what is the price of Nvidia going to be tomorrow? Or what is the price of Nvidia going to be this afternoon? What you're saying is, with your technology, you have a better chance of
(11:09):
getting that right, that you actually might be able to make an informed prediction about the future in a way that you couldn't have done, say, ten years ago. Yes? And the people who talked about this, they would come up with reasons: oh, the stock market, it's not like chess or Go, and therefore you can't really do predictions the same way. But what you're saying is that with these models, which are different than LLMs, there is some,
(11:31):
at least on a short time scale, predictive capacity.
Speaker 5 (11:33):
Yes. I think I find this still a little bit hard to believe myself. You get this kind of efficient market hypothesis stuff drummed into your head. If someone is saying they can predict the price of a stock in an hour, your instinctual reaction is incredulity. It just sounds like you're kind of bluffing or making it up. But no, these models can predict this. And I think the way to kind
(11:53):
of reconcile that distinction is that the predictions are very bad in some sense. It's hard to talk about accuracy here, but I think the way to think about it is the accuracy is like a fifty point one percent type thing. They're only a little bit better than random.
Speaker 3 (12:10):
But I suppose an extra one percent like blows up
your profits if you're doing it.
Speaker 5 (12:14):
Doing it scale doing it enough times and over time
you kind of realize the biased coin flip. And as
for why it might be possible to do this without
kind of invoking magic, It's like markets are very beautiful
interaction of like many different potties, all the different kind
of utilities and risk preferences and things, and the only
way you really see what people are doing is by
like the actions they take in markets, and you kind
(12:36):
of sucking up all that signal, that micro signal, and extrapolating.
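The "fifty point one percent" framing can be made concrete with a quick simulation: a coin that lands your way 50.1 percent of the time looks like luck on any one flip but compounds into a dependable edge over a million trades. A minimal sketch, with all parameters invented:

```python
# A tiny per-trade edge is indistinguishable from luck on one trade but
# compounds into a reliable profit over many trades. Purely a statistical
# illustration, not a trading model.

import numpy as np

rng = np.random.default_rng(0)
p_win, n_trades = 0.501, 1_000_000   # 50.1% accuracy, one unit won or lost per trade

outcomes = rng.random(n_trades) < p_win
pnl = outcomes.sum() - (~outcomes).sum()

# Expected pnl = n * (2p - 1) = 2,000 units; standard deviation is about 1,000,
# so at this scale the edge stands roughly two standard deviations above noise.
print(f"realized pnl: {pnl}, expected: {int(n_trades * (2 * p_win - 1))}")
```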
Speaker 2 (12:41):
The cynicism or the skepticism about the possibility of machines that could predict the price of stocks is a little strange, right? Because machines ingest data, and then, whatever, maybe they see a pattern: more likely than not, this constellation of data means tomorrow will be green. Humans do this all the time. What else do we have besides data?
Speaker 4 (12:59):
Right?
Speaker 2 (13:00):
You have an analyst and they put out: Tesla, or whatever, Nvidia, is going to go to five hundred dollars a share.
Speaker 3 (13:05):
How dare you insinuate I'm not smarter than a computer, Joe?
Speaker 2 (13:08):
We're all humans who have this data, or much less data, and yet humans are making predictions all the time. There's a whole industry of it. So the idea that, for some reason, a computer couldn't do this with much more data than analysts ever have, I don't get it. That's why the cynicism comes off as a little strange.
Speaker 3 (13:28):
I think some of the doubt stems from this idea
that a lot of these models tend to be backward looking, right,
and some of them occasionally are pretty bad at spotting
or reacting to big regime breaks. And I guess the
thinking again sometimes is that maybe humans are more flexible,
maybe more adaptive in their thinking, and they can kind
of spot these big cultural shifts. How do you actually,
(13:51):
I guess, prepare for those big pattern changes.
Speaker 5 (13:55):
Yeah, I was at HRT for COVID, and I thought that was kind of the biggest pattern break, and things went totally fine. Actually, it was more of an engineering crisis in some ways. Stock market volumes exploded and every system was just screaming, trying to keep up with the volume of activity. But in terms of the predictions, they stayed quite good.
(14:16):
And I had to reconcile this in my head as well. I guess it is a matter of horizon, like how far in the future we are talking. Intraday, I think a lot of the price movement is driven by just observing the flows. It's hard for us as humans to observe, but it's the relative patterns of buyers and sellers in the markets. And yes, during COVID, volatility was massive and prices were moving up
(14:38):
and down a lot, but they were still just going up and down during, say, March twenty twenty. And so it was sort of out of domain for a human,
Speaker 4 (14:45):
But I don't think out of domain in some sense
for the models.
Speaker 5 (14:49):
But I guess I also don't know how you would apply this thinking if you were trying to make
Speaker 4 (14:53):
month-ahead predictions.
Speaker 5 (14:55):
I often get people saying, oh, everyone knows hedge funds, which we're not, a hedge fund, it's like flipping coins, and it's some survivorship bias thing. And, you know, I genuinely don't know about months-out prediction stuff. That is not a data-rich environment.
Speaker 2 (15:10):
Just by definition, there have been more days than months, so for prediction on a day basis you're afforded a lot more data.
Speaker 5 (15:17):
That rule of thumb is basically very useful, and it extends all the way down to seconds, and we see that empirically all the time. And so, yeah, all the things I'm saying do have this caveat that they rely on there being a certain level of signal to noise. I definitely cannot make reasonable claims about the price of things in, like, a month using the same kind of AI hammer.
(15:39):
I guess, also to be specific, I'm talking a lot about using market data to make these predictions, and that's because on the sort of intraday timescale that is the most important thing. It's all about flows and things bouncing back and forth. If you're thinking about things on a month's timescale, I think that's fundamentals.
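The "more days than months" point is just counting: shorter horizons give you vastly more non-overlapping samples per year of history. A quick back-of-the-envelope, assuming a standard 6.5-hour US equity session:

```python
# Back-of-envelope for "there have been more days than months": the shorter
# the horizon, the more independent samples history gives you. Illustrative
# numbers only.

SECONDS_PER_TRADING_DAY = 6.5 * 3600   # regular US equity session
TRADING_DAYS_PER_YEAR = 252

for horizon, samples_per_year in [
    ("1 month", 12),
    ("1 day", TRADING_DAYS_PER_YEAR),
    ("1 second", TRADING_DAYS_PER_YEAR * SECONDS_PER_TRADING_DAY),
]:
    print(f"{horizon:>8}: ~{samples_per_year:,.0f} non-overlapping samples/year")
```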
Speaker 4 (15:55):
And can AI be used for that?
Speaker 5 (15:58):
I don't know, to be honest, and it's definitely outside my wheelhouse. I guess people have various opinions about that, and maybe some people very much would like to claim that they can, and, you know, others maybe don't. But it's definitely outside of my area of expertise. I don't know. Wait, talk
Speaker 3 (16:13):
to us about the data that you're using, or talk more about it, because this is another area where people tend to talk in PR speak sometimes: we have access to all this data, unusual data, alternative data, and that's going to enable us to use AI better. What are you actually looking at, and what have you found, I guess, most useful?
Speaker 5 (16:32):
Well, I think the thing that I found most counterintuitive when I started was that when you're thinking about predicting the price of anything a minute, an hour out, by far the most useful thing is just market data. These are the market data feeds you can buy from the exchanges for a pretty reasonable price. People often imagine some sort of competitive moat, but the moat around these exchange data feeds is not particularly high. I mean, crypto, you
(16:54):
know, is just like a Wild West. But everyone can collect these feeds, and so that is the most useful raw ingredient. It is the truest expression of everyone's intent, right: they're going to the market, they're quoting, buying, selling. That is the primary ingredient. People get kind of caught up on the whole, do you have a Twitter feed, type of thing. And Bloomberg sells a Twitter feed through its data products, and
(17:15):
every now and then, obviously, something happens, news happens during market hours that moves the price, dislocates the price. But if you really coldly rationalize it, that is a relatively infrequent thing compared to the overall mass of markets. So thinking intraday, I think it's these market data feeds. It's literally little events: someone quoted at this price and this size.
Speaker 4 (17:35):
It's all anonymous.
Speaker 5 (17:36):
Market data feeds are anonymous, and so that is the
raw stuff, and it is vast. There are just millions
and millions of events per day, per stock, per future.
When you get to the day or days timescale, that's where the alternative data, quote unquote, really comes in as an alternative to market data: the SEC filings, the news feeds, balance sheets, broker reports,
Speaker 4 (17:57):
Things like this.
Speaker 5 (17:58):
That's where that comes in: a vast sea of data offerings that people try and sell. I think in that kind of situation, you know, it's a very low Sharpe environment you start getting into, and it can be hard to attribute the extra Sharpe to these things. But in some sense it's also very democratized. Maybe some people are collecting very secret data sets, but my inbox, and
(18:19):
I'm not even the person in charge of buying these alternative data sets, is often full of people trying to sell me the latest alternative data set. I think a lot of them don't necessarily have much predictive value, but clearly there's a market for it.
Speaker 3 (18:31):
What's the craziest one you've seen?
Speaker 4 (18:33):
Huh, can I remember?
Speaker 5 (18:34):
I mean, people definitely reacted very strongly to the WallStreetBets era, trying to create a bunch of Reddit-extracted things, to go beyond just raw captures of Reddit and trying to distill
Speaker 4 (18:46):
It into something. Even just thinking about it,
Speaker 5 (18:50):
the meme stock thing is talked about more after it happens than before it happens. And so, like,
Speaker 2 (18:54):
I don't know, it's sort of a sideways question. You mentioned interpretability, and this gets at something I've been wondering
(19:16):
about AI for a while, not even in the finance realm specifically. You were at DeepMind, which of course produced a great Go player, better than the greatest grandmaster in the world. I play chess. We know that chess engines are much better than any human. On the other hand, as far as I can tell, there is no good
(19:36):
AI chess tutor. So in other words, the chess engines crush everyone. But I've never been able to get a thing where it says, okay, you did this move, but you know what, you're closing this rook file, and down the line... Because it doesn't do that. The Chess.com coach's human talk is very rudimentary, et cetera. Can you talk a little bit about why there are these
(19:57):
problems where some version of AI or machine learning or whatever can do fantastically well, but then the actual explanation of what it's doing, which I think is kind of what interpretability is, can't be articulated in plain English: why it's able to do what it does?
Speaker 5 (20:16):
I think it's just because these neural networks are just a big old blob of numbers, and what we're aiming to do in training these models is to almost free ourselves from almost all structure, and they might learn things in a way that is nothing at all like how we learn things. And so my best
(20:36):
guess for why it's hard is that they might be reasoning, in some sense, internally, and people use these words like reasoning.
Speaker 4 (20:43):
It kind of makes me wince.
Speaker 5 (20:43):
I've seen "imagination" and things like that used about neural networks. I don't know, that kind of anthropomorphization of them is kind of dangerous, because they are essentially processing things internally in this way that I think is inherently not like how we do. And that is my best guess. Yes, there are some interesting counterexamples. One of my favorite things in recent years
(21:04):
was Golden Gate Claude, where Anthropic made the model basically get very interested in the Golden Gate Bridge. Every question you asked would come back to the Golden Gate Bridge. So they're not completely impenetrable, but I guess we'd be hard pressed to map this back to how we think. It's very tempting to, and exciting too,
(21:25):
especially for AI safety applications, which is not really relevant to me so much, but I think it's worth attempting to try.
Speaker 1 (21:29):
Yeah.
Speaker 2 (21:30):
No, it strikes me that if you could solve that, you could actually make a lot of productivity gains in many jobs. But I do think that's an important hurdle. When you're training your models, so your models are different than large language models, et cetera, but what they have in common is there's an incredible amount of data, an incredible amount of compute demanded. How applicable, if someone had
(21:50):
worked on LLMs, would your training process be to them? How could they move from that environment to yours? Are there enough similarities in the basic notions and compute requirements to train a model such as yours, versus what people are doing at the major labs?
Speaker 5 (22:06):
I would say now, in twenty twenty five, absolutely. But I would not have said that in twenty twenty. And this is something that kind of caught me by surprise, having done this for a while now: our problems are kind of defined by long sequential strings of information, in some sense, and extrapolating from that. If I think back to the past of AI, it was like, is
(22:27):
it a hot dog or not? The image classifier test, you know. Then there was some stuff with audio and things, which I was a little bit more familiar with. Robotics, eh. But when we got to the sort of LLM era, it got very interesting, because suddenly the problems were very similar, in that you want to think back over long histories,
(22:49):
long contexts.
Speaker 4 (22:49):
Okay, that sounds good.
Speaker 5 (22:51):
You've got a lot of data and you want to churn through it in as efficient a way as possible.
Speaker 4 (22:54):
You also have to serve this model.
Speaker 5 (22:56):
It has to run at a relatively reasonable speed, especially for the LLMs. There are a million people typing into chatgpt.com and they want to hear a response in a relatively prompt manner. For us also, of course, the models have to make their predictions in a prompt manner, otherwise the predictions aren't useful. So all these things mean that our sort of way of thinking about it has become very similar to the frontier LLM labs.
(23:19):
It's just a very different modality. They're operating on, I guess, primarily text, and we're operating on this far less interpretable but still sequential stream of tokens, except our tokens are market events. And so it's a lot of fun, because, you know, in terms of the research that is still published, you can kind of look at it for inspiration and draw comparisons. But it's also very much
(23:39):
its own problem, which kind of keeps me interested every day, because it's its own unique thing.
Speaker 4 (23:45):
It's different.
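To illustrate what "our tokens are market events" could look like in practice, here is a hypothetical sketch of encoding order book events into a fixed integer vocabulary for a sequence model. The event schema, fields, and vocabulary sizes are all invented assumptions, not HRT's actual representation.

```python
# Sketch: encode each low-level order book event as a single token so a
# sequence model can consume it the way an LLM consumes text tokens.

from dataclasses import dataclass

@dataclass
class MarketEvent:
    kind: str       # "add", "cancel", or "trade"
    side: str       # "bid" or "ask"
    price_tick: int
    size_bucket: int

KINDS = {"add": 0, "cancel": 1, "trade": 2}
SIDES = {"bid": 0, "ask": 1}
N_PRICE_TICKS, N_SIZE_BUCKETS = 64, 8

def tokenize(event: MarketEvent) -> int:
    """Map an event to one integer token in a fixed vocabulary."""
    token = KINDS[event.kind]
    token = token * len(SIDES) + SIDES[event.side]
    token = token * N_PRICE_TICKS + event.price_tick % N_PRICE_TICKS
    token = token * N_SIZE_BUCKETS + event.size_bucket % N_SIZE_BUCKETS
    return token

stream = [MarketEvent("add", "bid", 30, 2), MarketEvent("trade", "ask", 31, 5)]
print([tokenize(e) for e in stream])   # the sequence fed to the model
```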
Speaker 3 (23:46):
I want to go back to the point you made
about data and I guess democratizing finance in many ways,
and maybe this is a weird question. But I'm thinking
back to the twenty tens and we used to talk
about the big investment banks as flow monsters. They see
all these orders, they get all these orders, they see
all the flow, and that allows them to optimize on
(24:08):
funding costs and other expenses. Is the idea that data
and AI can kind of replicate that advantage so that everyone,
or not everyone but Hudson at least becomes its own
little flow monster.
Speaker 5 (24:22):
Yeah, I think there are still some trends in markets that worry me a little bit, in terms of, I guess, our platonic ideal of market structure is probably everyone trading on exchange in a centralized place, but that is not really how things seem to be going. There's a huge amount of off-exchange, dark or quasi-dark volume,
(24:43):
and I think there are still a lot of corners of the trading world where being in the room is kind of this big advantage. And this is very much an anti-AI play in some sense. The data, the flow data, is hidden, and it's not something that you can feed into a machine.
Speaker 4 (25:00):
There's very sparse amounts of
Speaker 5 (25:01):
it. So that's kind of an interesting trend. A lot of this does get reported in a centralized place later, but it's not prompt enough to be useful. And so, since AI thrives on data, this is in some sense an issue: for the long run, you need to kind of be in the rooms where the sort of trading is happening.
Speaker 2 (25:20):
I'm glad you brought that up, because that's specifically what I'm curious about from the sort of physical infrastructure side. Like, if I have a query to ChatGPT, I don't care if the model is trained in Abilene, Texas, or wherever; it gets back to me and whatever. But I know that for high frequency trading, at least
(25:42):
on the execution side, there are certain parts that you want to be literally co-located, and you want to have the shortest possible wire, and however short it is, ideally you would like it to be shorter. Can you talk about the differences and similarities between essentially your physical hardware stack versus what would be required at a
(26:02):
large language model frontier lab.
Speaker 4 (26:04):
Yeah.
Speaker 5 (26:05):
I think at a bulk level there are actually some pretty similar things. I think about it as latency and throughput: latency being the time to react, and throughput kind of like how much thinking you can do in a certain period of time. And so you're right that this space demands low latency. Early in the twenty tens, there was the sort of Flash Boys book and perception, where it was really
(26:25):
kind of about arbitraging latency. I'm happy to report that, in some sense, all the latency has been arbitraged.
Speaker 2 (26:32):
For the most part, there's no more edge in shortening the wire?
Speaker 5 (26:34):
There's probably a little bit left, but it's relatively small. And I think if you look at the big quant trading firms, the need to really make the wires as short as they possibly can is done, or no longer relevant, which is great, because I find that stuff pretty boring, personally. I think about it more as, for a given speed of response, you should be the smartest. So
(26:58):
there's like this curve: if you're going to take a second to come up with your trading decision, it'd better be a really, really good decision, and then it doesn't kind of matter that it took a second. And if you're going to take a microsecond, well, (a) you probably can't do too much in a microsecond, but, you know, it should still be the best response possible in a microsecond. And so you
Speaker 2 (27:14):
Could be a little worse. It can be a little worse than the one-second decision.
Speaker 4 (27:17):
Yeah, for sure.
Speaker 5 (27:18):
And so essentially, for our training, we use the cloud, and we have our own training data centers that we've built ourselves. That part is basically the same, although at a much, much smaller scale than the scale of the Googles and so on. I don't know, the spending on stuff like this blows my mind. We are, I think, big if you're not comparing us to Google or Meta, but it's not bajillions of dollars. So
(27:41):
training is kind of the same. Inference: we need to put devices close to the exchanges, and we need to think very hard about the power usage and the latency. We have hardware teams, we make our own FPGAs, we make our own chips, and we use off-the-shelf GPUs. And what we try and do is make sure that for any given speed of response, we're making the smartest possible decision we can.
Speaker 2 (28:02):
FPGA, that's field programmable gate array? Oh yeah, sorry. Yeah.
Speaker 5 (28:07):
Basically, all these different devices have different latencies and throughputs. GPUs have very high throughput; that's what they're useful for, right? But the thing about markets is they're kind of narrow. The amount of traffic flowing into these LLMs from everyone typing into their web browsers is massive, and they also do some clever things to kind of batch up requests and process them together. We don't really have that luxury. The markets
(28:29):
are going to happen at the speed they happen. We can't kind of duck out for a while and catch up. We need to stay in the game.
So we have all sorts of interesting design challenges around how we use GPUs, which are relatively high latency.
Speaker 4 (28:39):
They take a while to give.
Speaker 5 (28:40):
Back a result, but they can process the whole stock market on one GPU, type of thing, versus the fast response. And so we have whole teams dedicated to thinking about, okay, I've got this intelligent blob, how do I get answers out of it in different ways at different speeds?
And that I think is where a lot of smarts
are going in this world these days, rather than like
(29:02):
how do I make sure my microwave towers are like
slightly better aligned somewhere in rural Pennsylvania, which is a
cool challenge.
Speaker 4 (29:09):
In its own right, but it's done.
Speaker 3 (29:10):
I think.
Speaker 5 (29:11):
I think people have found the straightest line from New
Jersey to Chicago.
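The latency-versus-throughput curve he describes can be sketched as a toy decision rule: for each response deadline, use the smartest device that can still answer in time. The device names and numbers below are assumptions for illustration only, not HRT's actual figures.

```python
# Toy version of "for a given speed of response, be the smartest": pick the
# smartest device whose latency fits inside the decision deadline.

DEVICES = [
    # (name, response latency in microseconds, relative "smartness")
    ("FPGA",      1,      1.0),
    ("CPU model", 100,    5.0),
    ("GPU model", 10_000, 50.0),
]

def best_device(deadline_us: float) -> str:
    """Pick the smartest device whose latency fits inside the deadline."""
    feasible = [d for d in DEVICES if d[1] <= deadline_us]
    return max(feasible, key=lambda d: d[2])[0] if feasible else "no response"

for deadline in [2, 500, 1_000_000]:
    print(f"deadline {deadline:>9} us -> {best_device(deadline)}")
```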
Speaker 3 (29:30):
Joe brought up some of the cynicism around CME's cloud
deal with Google, and this came up speaking of a
specific cynic who went on the record in one of
our episodes. Don Wilson basically made the argument that matching on the cloud doesn't necessarily make sense, because you might put in two orders and you're not really sure which order gets filled first. I guess you're kind of back in
(29:51):
that black box environment, or maybe it's a latency issue.
I don't know, is that a problem that you're seeing.
Speaker 5 (29:57):
That's something that I worry about. My general philosophy is markets should be very transparent and as fair as possible. So equalizing access is a good thing, in the sense that participants shouldn't be able to basically pull weird tricks to be faster. On the other hand, I think you want reliability. This concept of orders arriving at different times and being filled in a different order just doesn't seem like a very sensible way to run
(30:18):
a market. It's something that requires a lot of effort to engineer around, and it's just not good market design. It is very widespread, though, in existing exchanges across the world. We've traded in a vast number of countries, and some of the exchanges have such amazing hardware that if two orders are sent within a nanosecond of each other, the exchange will never
(30:39):
process them in the wrong order, even if it's a hundred different network ports and they're all connected. They have this amazing timestamping stuff. On the other hand, you might have a crypto exchange where it kind of feels like a kid learned JavaScript and set up a website, and you send an order and you may or may not get a confirmation that they even received it, and then you kind of have to
(31:00):
refresh your account balance page like five
Speaker 4 (31:02):
Minutes later to see if there's money in it or not.
Speaker 5 (31:04):
And we'll deal with it as it is, but certainly we have a preference for equalized access but predictable outcomes. Otherwise it leads to people spending effort on things that I think are not actually very great for society, everyone stressing very hard about latency.
Speaker 2 (31:21):
Yeah, no. I'm glad that you report that we've moved on a little bit since then. Where are your constraints? You know, when you talk to LLM people, there are debates about, right now, is it electricity? Is that the big constraint? Are there just not enough GPUs? Is it talent? Is it whatever? When you think about where you are now versus the optimal version of where
(31:42):
you'd be, what is it? I mean, data is the other big one, because there's all this concern that LLMs are going to run out of training data, et cetera. Where is the big constraint for you that you feel like you're solving for right now?
Speaker 5 (31:52):
I think in terms of really long term strategic planning, electricity is quite clearly a very binding consideration. When we think about spinning up new GPU-based training data centers, it really comes down to, is there electricity? Finding a piece of land to put a building on is easy. There's a lot of land.
Speaker 4 (32:11):
Yeah, the electricity negotiation.
Speaker 2 (32:13):
That's an issue at HRT, even for you?
Speaker 5 (32:17):
You know, because we have a sort of hybrid mix of using cloud providers and building our own data centers. And yeah, the negotiations and thinking about power constraints. We have an existing data center in a very cold place and we want to make it bigger, and the data center people are fantastic to work with, but they're saying, well, we need to go talk to the power grid
(32:37):
and negotiate this next tranche, and so on. It often feels like that is the bottleneck. In terms of GPU availability, it definitely was a crunch at some point in the past, but I don't feel like that anymore.
Speaker 2 (32:48):
Is a little more the entire stock market. Say a
little bit more about how you perceive the GPU market.
Speaker 4 (32:54):
Right, I think.
Speaker 5 (32:55):
I think if we ask for GPUs, we will get them delivered in a prompt manner, not necessarily like next day. But I don't feel like that is the long pole in spinning up more.
Speaker 2 (33:05):
When was the worst of the crunch?
Speaker 5 (33:08):
I guess twenty twenty three, late twenty twenty three felt
pretty bad.
Speaker 4 (33:13):
I was.
Speaker 5 (33:13):
I guess that was the Nvidia Hopper generation. And I also saw a number in Bloomberg yesterday, I think there was an Nvidia conference yesterday, and it said something like one million Hopper-class GPUs have been made, but already like four million Blackwell-class GPUs have been made. So I think there's been a ramp-up of supply, but I don't think they're sitting on unsold inventory either. I think it is being consumed.
(33:36):
But yeah, in terms of what is the hard thing, I think electricity. And it's insane. As a very millennial person, I guess climate change was a big thing growing up; in college there was a lot of discussion about climate change. And to see people spinning up data centers very fast by basically buying as many gas turbines as they can and putting them outside, I'm like, whoa, yeah,
(33:58):
what are we doing? It's wild, but this is the only way to get electricity promptly. You just have to throw gas turbines outside the building and turn them on. It's pretty radical stuff.
Speaker 4 (34:06):
And I don't know how all the numbers
Speaker 5 (34:09):
people are talking about for future data center expansion kind of math out, because you just back-of-the-envelope the power usage and things.
Speaker 4 (34:16):
And I know that the Sam Altmans of the world have thought about this, have talked about this.
Speaker 5 (34:19):
So, oh, we need to be adding this much new power generation per unit of time. But they're such daunting numbers, I just don't know how that is all going to work out. But yeah, even for us, in the grand scheme of things a much smaller player in terms of power consumption, we think in terms of tens of megawatts, not gigawatts, which is still more than most towns
(34:40):
and cities and things.
Speaker 4 (34:41):
But still, we
Speaker 5 (34:42):
find it a challenge to find electricity at a reasonable price.
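The megawatts arithmetic is easy to sanity-check: GPU count times per-GPU power, with a cooling and distribution overhead factor, gives facility power. The figures below are rough public ballpark assumptions, not HRT's numbers.

```python
# Back-of-envelope on "tens of megawatts, not gigawatts".

N_GPUS = 10_000
WATTS_PER_GPU = 1_000        # order of magnitude for a modern datacenter GPU
PUE = 1.3                    # power usage effectiveness: cooling etc. overhead

facility_megawatts = N_GPUS * WATTS_PER_GPU * PUE / 1e6
print(f"~{facility_megawatts:.0f} MW")   # ~13 MW: tens-of-megawatts territory
```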
Speaker 3 (34:46):
On this note, can you talk to us a little
bit more about where competitive advantage actually comes from in
this space, because if the GPU crunch is somewhat solved,
and if latency isn't as big an issue as it
used to be, where are people actually getting their edge from?
Speaker 4 (35:01):
Right. I mean, people. Talent is one of the other things
Speaker 5 (35:04):
you asked about as a constraint. It is a very competitive people market. We're essentially asking for people to know a lot of things, to be both good researchers and good engineers, because, I don't know, in this AI era the distinction is pretty blurry. It's not something you can just whiteboard and then the coding is a little bit afterwards. Any kind of research idea you have is
(35:26):
intimately connected with how you implement it. So that's already a tough ask. So people are a constraint. People like that we want to find, and we pay well for those people as a result, and it is competitive.
But I think the more subtle edge is almost putting it all together. Do you have people who can,
like an engineering team that can collect double data recorded,
(35:48):
make it available to the GPU training data center. This
is like many I guess it's petabyte scale data sets
and just stroying too much data, streaming it from wherever
a stored to wherever in the world the training data
center is reliably these training runs are very expensive and
then once you've got that model serving it, so it
(36:09):
kind of comes down to everything, and maybe that's kind of a lame answer, but it really is: I think you need to be optimizing the whole stack. And so my team is the AI team, but what that really means in practice is we're focused on training the models, which is an important but not sufficient part of the whole stack, because we would be kind of dead in the water without the teams at
(36:30):
HRT who think about how to actually get the data into these systems, and then the decisions out to the markets, and keep up when things get busy, all these things. So when I think about our competitors, I think there is a benefit to scale. I can't imagine how you would start a new company like HRT in the year twenty twenty five, because of the huge
(36:52):
initial lift to build enough engineering scale to achieve this sort of thing. And so I think our sort of peer companies have also invested very heavily in engineering, and will continue to do so. And there was an article in the FT a week or two ago about how firms like HRT are kind of extending themselves more into slower trading, and there are
(37:13):
firms, you know, those slower firms, that are trying to go faster.
Speaker 2 (37:17):
And yeah, I was just gonna ask, just on the prediction standpoint: okay, maybe you could predict with some reasonable confidence what's gonna happen in the next hour, sometimes, if you're lucky, maybe a day. Maybe a month is just ridiculous. But in your work, has that horizon broadened?
Speaker 4 (37:34):
It has. Yeah.
Speaker 5 (37:35):
I think for people who are aware of HRT at all, there is still a perception, sort of a pre-twenty-twenty perception, that we are purely a high frequency trading firm. But we would say we are both a high frequency and a medium frequency trading firm, and it's a big part of our business. One way to think about it, I think, is that if I really have a view on what a stock should be worth in, like, five days' time,
(37:56):
let's say I want to buy that stock, I'm going to acquire that stock over time. And maybe the question is, what's the best time to buy that stock over the five day period? Well, I have a model that tells me the best price in an hour. So maybe the shorter term model should inform the longer term trade, cascading all the way down.
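One way to picture that cascading, as a hedged sketch: a multi-day buying program paced evenly, but tilted slice by slice by a short-horizon signal. The function, signal convention, and thresholds are invented for illustration, not HRT's logic.

```python
# Sketch of "the shorter term model should inform the longer term trade":
# a five-day buying program that leans on a short-horizon price prediction
# to decide how much of each slice to execute now.

def schedule_slice(shares_remaining: int, hours_remaining: int,
                   short_horizon_signal: float) -> int:
    """Buy more of the remaining position when the short-horizon model says
    the price is about to rise (buy before it does), less when it says fall."""
    baseline = shares_remaining / max(hours_remaining, 1)   # even pacing
    tilt = 1.0 + max(min(short_horizon_signal, 1.0), -1.0)  # clamp to [0, 2]
    return int(baseline * tilt)

# e.g. 50,000 shares left, 30 trading hours left, model expects a small rise:
print(schedule_slice(50_000, 30, short_horizon_signal=0.4))  # buy ahead of pace
```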
Speaker 2 (38:12):
When you're doing this sort of slightly longer term, or slightly slower frequency, trading, is the fundamental job still the same, which is you're in the liquidity provision service business, just warehousing for longer? Or does it change? Because when I think of a hedge fund, I certainly don't think of liquidity provision, though maybe to some extent some of their strategies
(38:34):
might be sort of liquidity provision, but it's more directional. Is it still that, or does the fundamental reason why you make money, the service you provide, change by definition over that horizon?
Speaker 5 (38:44):
I think the market making service provision framing does break down; that stretches the analogy too far. I think you have to think of it as liquidity taking, which somehow seems more aggressive or something. We're trading against orders resting on the book. Someone says, I want to sell this stock, and we're like, we will buy it from you, because we think that in the long run it will be worth it, and so we do cross the spread and we do pay the
(39:07):
transaction costs. Sometimes, you know, you can also kind of acquire a position by market making, but with a tilt. So really, at the longer horizons, I think the sort of market making service analogy does break down. But in some sense there's always a counterparty and they wanted to trade for a reason. And I have a mental model, I don't know, you tell me if this sounds too wishy-washy.
Speaker 2 (39:26):
We love a mental model.
Speaker 5 (39:28):
Yeah. You mentioned Go and chess, right? The thing about those is that they're zero-sum games: there's only one winner. Someone's happy and someone's maybe equally unhappy, plus one, minus one. I think the reason that trading works is because it is in some sense positive sum. You know, money is conserved, and I guess a little fee goes to the exchange, so
(39:50):
in some sense, money at the moment of a trade is actually a little negative-sum. But utility, people's general happiness: I don't know, my paycheck goes into my 401(k) provider and it buys some ETFs. I'm relatively insensitive to how exactly that happens. I'm just not gonna look at it for, I don't know, forty years, right? No.
Speaker 4 (40:10):
I try not to look at it, especially lately.
Speaker 5 (40:12):
But, uh, yeah, my utility is on a very long horizon, and so if someone sells to me at one cent different, I don't really care. But the person who made the cent is happy, and I'm happy because I got good liquidity and didn't cross a huge spread. So that is kind of why I think it all makes sense, and why people are trading together. But it's also why thinking about markets in an AlphaGo sense doesn't make sense; it
(40:34):
kind of doesn't really apply. If you thought of markets as HRT and all our competitors in some sort of deathmatch, who's the smartest, who's trying to pick each other off, then markets would be this giant standoff where no one would be trading. Everyone would just be waiting. But obviously markets are very vibrant, and I think it's because even when we're crossing the spread, we're crossing it against someone who
(40:54):
wanted to sell for whatever reason. If we're right, I guess in five days' time they might be less happy, but
Speaker 4 (41:00):
Maybe they weren't. Actually, maybe they were just like hedging
a position. They don't care what.
Speaker 5 (41:03):
The stock's price is in five days. They just wanted to hedge their position, and we traded with them. So that's the way I reconcile it in my head. But it can still be like a sort of service provision: we make money only because someone else wants to trade.
Speaker 4 (41:16):
If no one was trading, we wouldn't exist, right.
Speaker 3 (41:18):
And different market participants with different motivations and goals and aims.
I want to go back to the talent question for a second. And I get the sense that engineers
like open source and they like contributing to the research
ecosystem on AI. And then I get the sense that
trading firms probably do not like open source, and they're
much more into protecting their proprietary models or data or whatever.
(41:43):
How does a company like HRT, how do you actually
balance that tension?
Speaker 4 (41:47):
Yeah?
Speaker 5 (41:47):
I mean, to give a sort of really honest answer: for many years this was a relative competitive disadvantage for us in recruiting. We often have conversations with, maybe especially, PhDs who are graduating, and they would say, well, I can go to Google and I can still publish my research, and that kind of gives me optionality.
Speaker 4 (42:04):
People will know who I am.
Speaker 5 (42:06):
If I go into an HRT-like firm, I essentially go behind this veil and never emerge, and people just have to kind of take it on faith that I did smart things for many years. And I would have basically no strong counterargument, apart from the fact that actually writing papers is kind of overrated, I've been there, done that, and when you get older you will not care. Now, though, there's this interesting situation where the golden era,
(42:30):
maybe, of being able to work at a big tech company and be paid for public research is very much over. The papers that do come out of the big AI labs are essentially either very stale or not important, and if you're working on the most important cutting-edge things, you can't share what you're doing and it's very secretive. So in some sense the problem solved itself a little bit for me, and
(42:50):
people now recognize that IP should be protected. I've even seen some of these sort of AI lab people talk a lot about non-competes in public, tweeting about non-competes
Speaker 4 (43:00):
and things, which is an amazing turn of events, because I feel like.
Speaker 2 (43:03):
That was very antithetical.
Speaker 5 (43:05):
I mean, they're literally, effectively banned in the state of California, and I think people were almost proud of this fact, and would kind of hold it against the New York sort of trading world, being like, oh, look at these people with their non-competes and things. And then someone comes along and pays a hundred million dollars or whatever for your researchers, and a lot of that money is being paid for talent, but it's
(43:26):
also in some sense paying for intellectual property.
Speaker 4 (43:30):
And like, those people.
Speaker 5 (43:31):
know how the soup is made, and they are not writing it down, not committing any explicit sort of IP theft. But if you hire five people who've been making the soup, they know a lot of process knowledge, and you might suddenly feel a little differently about protecting that. We spend a lot of time training our employees. It takes a long time for them to be productive.
(43:54):
In some sense, it would be a shame if people could just take that knowledge and immediately leave.
Speaker 4 (43:58):
And so, yeah, just.
Speaker 3 (44:02):
Going back to the steamroller, I promised, I promised we would. When I hear AI in trading, and I know people are very excited about agent-based AI nowadays, part of me thinks back to one of the more amusing events in financial history. Joe, I'm sure you remember the time that one of Knight Capital's algos.
Speaker 2 (44:23):
I would not find that to be amusing. Yeah, it was the worst nightmare possible. But amusing for the peanut gallery, right?
Speaker 3 (44:29):
Right, schadenfreud. So this algo went rogue and bought like
seven billion dollars worth of stuff? Yeah, exactly, what are
the guardrails that you put in place to avoid the
destiny of night capital.
Speaker 5 (44:42):
So every training cycle we have a talk about the Knightmare, with a K, and we have multiple ex-Knight employees at HRT, as you might expect, just from the lineage of a successful trading firm that ended in a kind of unhappy way. We have many people who were at Knight.
Speaker 2 (44:57):
This story is crazy. A successful trading firm that ended in about fifteen minutes, yeah.
Speaker 5 (45:00):
Yeah. So it's fair to say that that stuff haunts us, and we try and take as many lessons away from that as possible. Defense in layers. I think one of the things that I'd like to emphasize with the AI stuff in particular is that it is not like there's some neural network directly sending orders to the NYSE. It is, in some sense, providing a plan, and then traditional,
(45:23):
heavily audited, risk-checked layers take the actions, and that's just kind of how it has to be. And so for us, on an operational, day-to-day basis, it's just many, many layers of sanity checking throughout the day. And then at a sort of high level, it's a very careful process, including processes to specifically avoid the KCG-type scenario: how are you even
(45:46):
releasing new versions, and what pre-release checks do you run?
And audits. And even during the day, we have some, I don't know, I guess you'd call them sanity checks of the neural networks, to make sure that they are producing the values that we expected they would be producing. And those checking processes run a little bit behind, because they can't keep up with the flow, but close enough to kind
(46:06):
of just, again, check things like the numeric stability of the model and things. It's not about losing money or making money on the day; it's not risk in the kind of financial sense, it's operational risk. But the paranoia runs deep, and that's probably something that's still very different, I think, in this market from the sort of other AI world, where I
(46:27):
guess anything goes and failure rates are kind of just priced in. But yeah, you could imagine just ruining everything, and I guess we worry about losing money, but I think we worry more about taking an action that a regulator would not want us to take, because if you lose the trust of regulators, you lose it for a very long time. And we trade in a lot of markets and we pay very close attention,
(46:49):
and I have deep respect for the regulators and their decisions in all those markets, and the rules are sometimes very complex, and man, do we watch that stuff like a hawk. Because, you know, you don't want to be kicked out of a country for making an operational error, and this is a very low-tolerance culture from regulators in terms of making mistakes. So we stress it a lot, and I think we should, because it's the profit you make in ten years by still being in the
(47:11):
game versus move fast and break things. It's not move fast and break things, though we still want to move fast.
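The "plan from the model, actions from audited layers" design can be sketched as a simple pre-trade risk check that refuses anything non-finite, oversized, or far from the last trade. A minimal illustration with made-up thresholds, not HRT's actual controls:

```python
# Sketch of the "layers of sanity checking" idea: model output is a plan,
# and a separate, simple, auditable layer refuses anything outside hard
# bounds before an order can exist.

import math

MAX_ORDER_SHARES = 10_000
MAX_PRICE_DEVIATION = 0.05   # refuse prices >5% away from the last trade

def risk_check(order_shares: int, order_price: float, last_trade: float) -> bool:
    """Return True only if the model's proposed order passes every hard check."""
    if not math.isfinite(order_price):           # numeric-stability check
        return False
    if abs(order_shares) > MAX_ORDER_SHARES:     # fat-finger / runaway size check
        return False
    if abs(order_price / last_trade - 1) > MAX_PRICE_DEVIATION:
        return False
    return True

print(risk_check(500, 101.0, last_trade=100.0))        # True: passes
print(risk_check(5_000_000, 101.0, last_trade=100.0))  # False: blocked
```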
Speaker 2 (47:16):
I have like a million more questions, but for the sake of time, I'll just ask one more. And I don't even know whether it's something you're in a great position to answer. It's something I actually want to do an entire episode about at some point. But as you would characterize it, what happens in the second after a jobs report is released? What I'm talking
(47:36):
about specifically is: numbers either flash on the screen or a piece of text appears on a website, and markets move around a lot, all that, and then there are people saying, actually the jobs report was good, if you actually look at the wage number, and so on. But in that instant, in that first microsecond after the release, markets are already moving, certainly before any human has had a chance to read the thing
(47:59):
or form a view. So what I assume is that there's training on, here's the text and here are the things, and whatever. But as you would put it, from the perspective of HRT, what happens in the millisecond after an event?
Speaker 5 (48:13):
Yeah, so, I mean, we have like a
Bloomberg headlines feed that's pretty low latency, and
if it's an important article, it's like starred
in the feed, things like this, right? But you can do
everything from having kind of hand-crafted logic
that looks for keywords, all the way through to putting it through
an AI model.
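
To make the hand-crafted end of that spectrum concrete, here is a toy sketch; the keyword lists and headline format are invented for illustration, and a real system would run at far lower latency than Python allows:

```python
# Toy sketch of keyword-based headline logic -- the simple end of the
# spectrum described above. Keywords and fields are invented here.
BULLISH = {"beats", "raises guidance", "buyback"}
BEARISH = {"misses", "cuts guidance", "halted"}

def classify_headline(headline: str) -> str:
    """Return a coarse signal from a raw headline string."""
    text = headline.lower()
    if any(k in text for k in BEARISH):
        return "sell-signal"
    if any(k in text for k in BULLISH):
        return "buy-signal"
    return "no-action"  # ambiguous headlines go to slower models or humans

print(classify_headline("ACME Corp beats estimates, raises guidance"))
# -> buy-signal
```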
(48:34):
One of the things that I still can't kind of wrap my head around is, I guess,
without saying specific company names, there are options trading firms
that have thousands of people who are essentially cyborg trading options.
They have maybe ten people trading options for a
single big stock like Nvidia, and these are humans
staring at the feeds for these things and clicking buttons,
(48:57):
and they have user interfaces that will set it up for
them to hit the green button
Speaker 4 (49:00):
Or the red button, essentially, very fast. It's weird. We
actually, for a hackathon,
Speaker 5 (49:06):
we got a PlayStation controller and gave people
a chance to try and practice reacting to events. It's
really tough, but it's a learnable skill. I think in
an efficient-markets sense, this should be AI-able. It
is challenging, though, because if you imagine kind of
plumbing it into ChatGPT, it would be too slow;
the latency would probably be too high.
Speaker 4 (49:27):
I mean it's not that fast, right.
Speaker 5 (49:29):
It's fast for any normal day-to-day thing, but
for markets it's kind of slow. Also, and this
is a very interesting research challenge: you can't
literally use ChatGPT to backtest anything. It knows
every Jerome Powell speech, and it knows what happened afterwards, because
it's trained on the whole internet. So how do you
really get confidence that for the next Fed speech
(49:50):
it's going to do the right thing? Traditionally in finance,
you backtest things to see how you would have done
in the past. But in this case, it's
seen it all before. And I've seen academic finance papers where they try to
grapple with this, and they say it still works, they
try to account for it, but, I don't know, this
stuff really is that smart. Yeah, the whole
thesis is that it's memorized everything it has been trained on,
(50:13):
so why would it be reliable?
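
One common, partial mitigation for the contamination problem Ian is describing can be sketched as follows; the cutoff date and events below are made up for illustration, not an actual evaluation:

```python
# Sketch of the contamination problem: only score an LLM's "predictions"
# on events after its training cutoff, since anything earlier may simply
# be memorized. Dates and events below are invented for illustration.
from datetime import date

TRAINING_CUTOFF = date(2024, 6, 1)  # hypothetical model cutoff

events = [
    {"date": date(2023, 11, 1), "event": "Fed speech", "model_call": "hawkish"},
    {"date": date(2024, 9, 18), "event": "Fed speech", "model_call": "dovish"},
]

clean_eval = [e for e in events if e["date"] > TRAINING_CUTOFF]
contaminated = [e for e in events if e["date"] <= TRAINING_CUTOFF]

print(f"{len(contaminated)} events discarded as possibly memorized")
print(f"{len(clean_eval)} events usable for honest evaluation")
```

Even this only narrows the window: post-cutoff events accumulate slowly, which is exactly why getting statistical confidence here is hard.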
Speaker 4 (50:15):
And so if you see someone being like...
Speaker 5 (50:16):
Oh, I ran every Federal Reserve speech through ChatGPT
and it got it right like nine out of ten times,
it's like, only nine out of ten times? Why
not one hundred percent? So I do
think it is interesting how many
humans are still involved in relatively high-speed trading. There
are a lot of people still doing this in all sorts
(50:37):
of niche products, and it's presumably because it's very hard
to integrate all the information. Is it AGI in, I don't
know, twenty twenty-eight, twenty thirty? I don't know. There's
still a lot of humans trading stocks and options, and
so, like, I don't know how to reconcile that, but
I think about that.
Speaker 4 (50:50):
When I read fun being.
Speaker 2 (50:53):
Ian Dunning, that was fantastic. There really are like hours
more of conversation, so maybe we can have you back for next
week's episode. But no, that was great. Thank you for
coming on, really appreciate it.
Speaker 6 (51:03):
Yeah, pleasure, Thank you, Tracy.
Speaker 2 (51:17):
I thought that was really great. I liked this idea,
this sort of anti-cynicism, because you do hear a
lot of people say, oh no, AI could solve
things like chess or whatever, but the stock market is
fundamentally different, and I've never been totally satisfied with some
of the theories for why. And like, I get that stocks
are not necessarily a solvable problem in quite
the same way. But humans make money in the market
(51:40):
by matching patterns. Why can't smart silicon brains do the
same thing?
Speaker 3 (51:45):
Well, there's also history. Now we have many years of
high-frequency trading and, yeah, algorithmically driven trading where people have
made a lot of money, so it seems to be working.
The light bulb moment for me was when Ian talked
about the timeframe and the importance of the time frame,
and I think that's really the key in many ways.
It's adapting what you're doing with AI to the data
(52:07):
that's available, and most of the data on markets
is going to be very short term, more seconds
than minutes, more minutes than days, et cetera, et cetera.
And a lot of the data is also biased to
immediacy versus past analysis, which he spoke about as well.
Speaker 2 (52:25):
It is always funny in finance. People are like, oh,
seventeen out of nineteen times there's been this death cross
in the S&P 500, stocks went down.
Any serious data scientist would spit at that sample size.
It's beyond a joke to talk about a
sample size of nineteen.
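
Joe's point is easy to demonstrate with a quick simulation; the numbers here are illustrative only. Scan enough meaningless indicators over just nineteen events and some will look prophetic by pure chance:

```python
# Quick simulation: scan 10,000 random "indicators" over 19 events
# and see how many hit 17/19 or better with zero predictive power.
import random

random.seed(0)
N_EVENTS, N_INDICATORS, THRESHOLD = 19, 10_000, 17

lucky = 0
for _ in range(N_INDICATORS):
    # Each indicator is a coin flip with no real signal.
    hits = sum(random.random() < 0.5 for _ in range(N_EVENTS))
    if hits >= THRESHOLD:
        lucky += 1

print(f"{lucky} of {N_INDICATORS} meaningless indicators hit "
      f"{THRESHOLD}/{N_EVENTS} or better")
# P(>=17 of 19 by chance) is about 0.036%, so roughly 3-4 of the
# 10,000 random indicators will clear the bar anyway.
```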
Speaker 3 (52:43):
Yeah, but death cross in a headline. It's so tempting.
Speaker 2 (52:48):
That's true. You know, advice to journalists: never
pass up a chance to put death cross in a headline. I
thought a few things were interesting. One is,
I was glad to hear that the wire length problem
is no longer a thing. Yeah, it's not just this race ever
closer to the extreme.
Speaker 3 (53:02):
It was getting kind of boring when people were talking about
the Cold War in HFT and all of that.
Speaker 2 (53:06):
It's interesting that the GPU market has eased versus where
it may have been a couple of years ago. And
it's interesting that, even at the scale of a good trading shop,
electricity is proving to be a main constraint, which
does raise questions about whether we're just going to hit
up against a wall, given some of the AI plans
that so many people are banking on for the chatbots.
Speaker 3 (53:28):
Yeah, I thought also, I guess, the cultural shift in
some of the labs. Yeah, it was really interesting, this
idea that they've become more proprietary and perhaps more mysterious
in some ways, rather than the trading firms becoming more open. Yeah.
Speaker 2 (53:43):
Lots of great conversation. Answered some questions, yeah, and raised plenty more.
Speaker 3 (53:48):
That was helpful, and I'm sure we'll talk to him again,
maybe not next week, but soon, perhaps next year. All right,
shall we leave it there? Let's leave it there. This
has been another episode of the Odd Lots podcast. I'm
Tracy Alloway. You can follow me at Tracy Alloway.
Speaker 2 (54:01):
And I'm Joe Weisenthal. You can follow me at The Stalwart.
Follow our guest Ian Dunning. He's at Ian Dunning. Follow
our producers Carmen Rodriguez at Carman Armen, Dashiell Bennett
at Dashbot, and Kel Brooks at Kel Brooks. For more Odd Lots content,
go to Bloomberg dot com slash Odd Lots, where we have the
daily newsletter and all of our episodes, and you can
chat about all of these topics twenty four seven in
our Discord, discord dot gg slash odd lots.
Speaker 3 (54:24):
And if you enjoy Odd Lots, if you like it
when we dive into how companies are actually using AI,
then please leave us a positive review on your favorite
podcast platform. And remember, if you are a Bloomberg subscriber,
you can listen to all of our episodes absolutely ad free.
All you need to do is find the Bloomberg channel
on Apple Podcasts and follow the instructions there. Thanks for
(54:44):
listening.