
May 15, 2024 · 31 mins

AI technologies have sparked widespread curiosity and adoption across many industries, encouraging professionals to explore the practical applications of AI in their daily tasks. This is no different in the finance industry, where experts have been experimenting with the transformative potential of integrating AI into standard tasks. 

That is what the startup FinPilot is doing. Described as ChatGPT for financial questions, FinPilot uses AI to pull information out of unstructured financial data. Co-founder and CEO Lakshay Chauhan joins Jim Jockle to discuss this technology, its implications and its future.

Episode Transcript

Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Speaker 1 (00:07):
Welcome to Trading Tomorrow: Navigating Trends in Capital Markets, the podcast where we deep dive into the technologies reshaping the world of capital markets. I'm your host, Jim Jockle, a veteran of the finance industry with a passion for the complexities of financial technologies and market trends. Because this is Trading Tomorrow: Navigating Trends in Capital Markets, where the future of capital markets unfolds.

(00:31):
Over the past year, the rise of AI technologies such as ChatGPT and Copilot has sparked widespread curiosity and adoption across many industries, encouraging professionals to explore the practical applications of AI in their

(00:53):
daily tasks. In finance, discussions about the transformative potential of integrating AI have never been more popular. Until recently, it was just talk, but now this interesting frontier of innovation is rapidly becoming a reality, and today we're thrilled to welcome a guest whose groundbreaking startup is helping make that happen.

(01:13):
Joining us today is Lakshay Chauhan, the co-founder and CEO of FinPilot, which is being called ChatGPT for financial questions. Currently available on public data, FinPilot uses AI to pull information out of unstructured financial data, for example, data found in SEC documents. Along with his co-founder, John Alberg, Lakshay's company

(01:35):
received $4 million in seed financing led by Madrona, with participation from Ascend VC and angels from leading hedge funds. Lakshay is a longtime machine learning engineer in Seattle for the hedge fund industry. Lakshay, thank you so much for joining us today. I mean, it's a fascinating product and I'm really excited

(01:55):
to dig into this a bit further with you. So perhaps just to kick us off, where did you come up with the idea for FinPilot?

Speaker 2 (02:02):
Thanks, Jim, happy to be here. It's interesting. So before starting FinPilot, I was at a hedge fund and I was the head of machine learning there. I spent a lot of time building ML models for investing purposes, so I was really deep into financial data, trying to build prediction models with deep

(02:22):
learning. Over a three-, four-, five-year period we had kind of mined all the quantitative data we could for the fund I was working at. And during that time, after mining all the quantitative data, we were looking at

(02:44):
unstructured qualitative data. We knew everything about the financials, everything about momentum data, whatever we could get our hands on. But then there's this quality aspect that matters a lot in investing and that we weren't really capturing at the time. So that's where I started to dig in: what can we do in understanding this unstructured data, this textual data? And so we started digging into sources like filings and

(03:09):
transcripts and market research reports. And when I was looking into these, transformer models had come along. It had been a few years and I was playing around with them, and this is way before ChatGPT or GPT-3. I think GPT-2 was out at the time. But the fact that these models could understand language so well was very surprising to

(03:31):
me, and that was like, whoa. Actually, the fact that these models are good at understanding long text, and logic and reasoning, could be more interesting for the human side of things, for the analysts themselves, because they are the ones reading these long documents, while computers are processing data, which you were

(03:51):
already doing. So I think that was sort of, okay, can we do something for the humans, because there are just so many analysts out there? And that to me seemed a very compelling opportunity, given that analysts, like all knowledge workers, spend a lot of time reading and synthesizing information, and given that I

(04:12):
could see AI getting there in understanding these sorts of documents. That was really the starting point, and we started talking to people and kept getting more and more signal on what it could look like. But that was really the genesis of why it made sense to do it, because these models got so much better, to a point where they

(04:34):
could understand this information more generally. You didn't have to program them. You didn't have to program: this is how you extract data from an earnings call, or understand sentiment like this, or whatever. It was very general purpose, and to me the application was, okay, the human productivity side of things could be really, really fascinating.

(04:54):
So yeah, that was it.

Speaker 1 (04:57):
So you know, you mentioned unstructured data, and that means a lot of things to a lot of people. Were you scraping security cameras of people walking into stores? Maybe give us a little bit more context around that.

Speaker 2 (05:13):
Yeah, no, that's a very good question. So typically in the financial world, or the quant world, quantitative data is just numerical data, like tick data, credit card data, all the numbers, right? And unstructured data can be numbers too, but it's basically something that has not been processed and is in more raw form.

(05:34):
So it could be security cam footage. You've heard of hedge funds looking at parking lot images of Walmart and trying to figure out, okay, what's the traffic like? So that would be categorized as unstructured data. What I specifically mean is more textual data, so PDF reports, SEC filings, transcripts, and

(05:55):
management calls at conferences and things like that. So the data is raw, it's not structured, it has not been analyzed in any way, it's not easy to search. That's what I mean. But the same thing applies to videos and audio and whatever.

Speaker 1 (06:11):
But yeah, so tell me, what goes into building a product like this?

Speaker 2 (06:16):
The current state of AI is interesting. When ChatGPT came along in November of 2022, it was quite interesting: you just type anything and you'll get something amusing back, and so the technology of large

(06:41):
language models enabled you to think about a lot of different things and quickly build something that could show you, oh, this is possible, like a very cool demo. But as we started working with these models, we realized that if we wanted to take care of analysts, buy-side analysts, for example, it would take a lot more than just putting together these APIs, and we realized that these models are not good at domain-specific information.

(07:01):
So if you wanted to ask finance-related questions, they would make a lot of mistakes, both in terms of understanding the question, but also hallucinations, which is the technical term for making something up that doesn't exist anywhere. Language models are notorious for that. So our approach, having the ML background, was to build this

(07:24):
retrieval system that has been fine-tuned and built for the financial domain. What that means is we have four AI models that we've built in-house that understand financial documents very well. So when you ask a question, when you try to understand a table, it just knows much better

(07:46):
what's being asked, what is the right piece of information, what's the nuance between EBITDA and adjusted EBITDA, all these kinds of nuances that general models don't capture. And to do that we had to train our own models, run our own GPUs and run our own inference stack, and that has been a lot of fun, because you uncover these little nuances as you

(08:07):
play around with these models, and you learn where they fail and where they don't fail. And building it ourselves helps us make it faster and cheaper. So that's a big part of it. Our core thing is building a retrieval system such that, when you ask a question or give it a task, it can identify what

(08:30):
piece of information it needs to find, and where to find it, from thousands of documents, essentially. That's been the core part of what we've been doing so far. And then you can layer things on top: you can put a chat interface on it, you can put AI agents on top of it, you can do multiple things. But the core of it is being

(08:51):
able to find the right piece of information you're looking for, with confidence and accuracy.

Speaker 1 (08:56):
And you know, I think maybe you could take us a little under the covers, right? Everybody just says, oh, you've got to train the model. But for something so specific, like financial services, what goes into that training process?

Speaker 2 (09:13):
It is, you know... we've been training our models, and you see companies like OpenAI spending hundreds of millions of dollars, potentially billions, and then there are smaller firms and newer companies like ours doing different types of training. So essentially, the way language models are trained is, you take all the text in the world that you can

(09:35):
possibly get your hands on, and you feed it into this model, which does fill-in-the-blanks. So you would have an English sentence, like "the cat is eating its food" or something, and you would blank out two words, and then you force the model to predict those words through a probability distribution over all the words that are possible. So initially it's random, it's just filling out words, but as

(09:58):
it trains, it tries to understand what is the most likely word after this sequence, right? This is called pre-training, and this is the most expensive part of it. When models are being pre-trained, they're learning how to complete sentences, essentially, but they don't have any specific domain knowledge about, say, how to

(10:21):
do a financial analysis, or why NVIDIA's certain metrics are different from AMD's, and things like that, like how one company defines net retention revenue differently than another. It doesn't have that nuance. It's general so far.
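The "predict the most likely word after this sequence" objective can be shown in its crudest possible form, a bigram counter over a tiny corpus. This is only a cartoon of the idea; real pre-training learns these probabilities with a neural network over enormous corpora:

```python
from collections import Counter, defaultdict

# Tiny corpus; real pre-training uses essentially all text a lab can collect.
corpus = "the cat is eating its food . the dog is eating its dinner .".split()

# Count which word follows each word: the simplest version of learning
# "what is the most likely word after this sequence".
counts = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1

def predict(prev):
    """Return the most probable next word after `prev`."""
    return counts[prev].most_common(1)[0][0]

print(predict("is"))  # → eating (both sentences continue "is" with "eating")
```

Even this toy shows the limitation Lakshay describes: the model learns whatever patterns dominate its corpus, and nothing about domain nuance that the corpus does not spell out.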
So to incorporate that specific knowledge for finance, or some

(10:43):
team, or some company, you need to teach the model. You kind of extend that training process with fine-tuning, and zone in on the specific aspects of the model that you care about. And what that looks like is, you create a data set of, hey, this is what I want to do

(11:03):
and this is the output, or these are the things I'm looking for, and you give it a lot of training examples. If you're trying to teach it, it's almost like teaching a young kid, but with lots and lots of examples. Like, hey, I want you to understand the nuance between this term and this definition, or how this company calculates subscriptions, or billings, versus that company, or

(11:26):
whatever it is. And you create this data set by either having humans or expert analysts annotate it, or doing it yourself, or with some automated system. But essentially you're trying to give it more examples of what you want it to do, or where it's failing, and so

(11:46):
that part is what we typically mean by fine-tuning: training that last layer of the network to understand you a little better on what you're trying to do.
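A fine-tuning data set of the kind described, expert-annotated examples teaching domain nuance, might look like the following. The prompt/completion layout and the field names here are an assumption for illustration, not FinPilot's actual schema:

```python
import json

# Hypothetical annotated examples of the kind an expert analyst might write
# to teach a model financial nuance (illustrative content, invented here).
examples = [
    {"prompt": "Is 'adjusted EBITDA' the same metric as 'EBITDA'?",
     "completion": "No. Adjusted EBITDA adds back items management deems "
                   "non-recurring, so its definition varies by company."},
    {"prompt": "Company A books 'billings' at invoice; Company B at cash "
               "receipt. Are their billings directly comparable?",
     "completion": "Not directly; the revenue-recognition timing differs."},
]

# Fine-tuning pipelines commonly consume one JSON object per line (JSONL).
with open("finetune.jsonl", "w") as f:
    for ex in examples:
        f.write(json.dumps(ex) + "\n")

print(sum(1 for _ in open("finetune.jsonl")))  # → 2
```

The substance is in the annotations, not the format: each pair encodes one piece of the "nuanced take" that a general model would miss, and thousands of such pairs steer the last stage of training.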

Speaker 1 (11:52):
You know, it's interesting. As a novice in this world, if you will, I was at a lecture a couple of years ago, and it was fascinating to me how CAPTCHA sells all its data. You're going to go buy tickets to whatever rock concert or whatnot, but it's basically humans teaching the machine

(12:16):
images, which was fascinating. What is a stop sign? Where is a bike? Which one's an electric pole, or something of that nature. And it made me feel a little stupid.

Speaker 2 (12:22):
It is very similar to that, right? Yeah, for us, we're trying to teach it the very nuanced take on things that an expert human analyst would expect and want. So, beyond just, is it good or bad, you want to

(12:43):
understand why. If you take the top analyst in any field, or any company, or any industry, you want to understand how they do their reasoning and thinking, and impart that to the models. So that's where the challenge and the opportunity come in.

Speaker 1 (12:59):
So, you know, that opens up a whole other argument and discussion of, are we losing our jobs to computers? But let's leave that for a different podcast. So let me ask you a question. Your team has two products. Let's start with the web-based AI chat tool. Can you give us a situation in which a

(13:20):
financial professional would want to use this?

Speaker 2 (13:23):
Yeah, yeah. So we have this beta, open to the public, which is this chat tool, and you can basically ask any question about companies, financial questions about companies. And what it's really good at is, you can specify sources. If you only want to focus on SEC filings and transcripts, you can specify that.

(13:44):
If you want to use the web, you can specify it. But once you do that, you can ask any question, and it can scour all those documents and give you, succinctly, the answer that you want. And the very nice thing about it is, everything is cited. So if it cites a number, you can click on that number and it'll take you exactly where that's coming from, even down to

(14:05):
a cell in a table, and that's very powerful for two reasons. One, language models are known to hallucinate, as I mentioned before. So if you go to ChatGPT and ask a bunch of financial questions, more likely than not you will find something that's not true or not present anywhere. But the other thing is, in the field where we operate,

(14:28):
accuracy is obviously the most important thing, right? I need to be able to trust the output. I need to be able to know where the numbers are coming from. And to build that trust and confidence for the analysts, we have spent some time building this, where you can cite everything, which is not that straightforward. But we feel

(14:52):
like if you get one thing wrong one time and you can't verify it, you're not going to be able to use any of these tools, right? Trust is going to be a big part of AI adoption across any industry, but especially for us, because if you have to manually redo all the work, AI is not providing value. So that's one thing. Somebody can ask,

(15:14):
hey, what has been going on with the litigation of 3M in the last five years? They have something going on with PFAS or something, and you can get a very quick answer, versus reading the last five years of documents. Or, hey, why have the gross margins of a certain company been falling? And you can quickly get those answers. And obviously simple things about segment

(15:34):
revenues and things like that. But the other thing we actually launched recently, and we didn't know how popular it was going to be, is this: a lot of buy-side folks have an investment thesis, right? And after a quarter happens, they want to know, hey,

(15:57):
for this thesis, were there any questions and answers that were discussed in the Q&A section of the call? So in this tool, you basically just put in your thesis, or whatever topic, and within two seconds you get all the relevant Q&A questions, so you don't have to dig through all of them manually. And it turns out it's pretty popular. So, yeah, those are the kinds of things where you want quick

(16:19):
answers and you want to dig into a company, and even get some ideas from the AI analyzing it. Or this Q&A analysis tool, which is: just pop in your thesis and get back the most relevant questions from the call.

Speaker 1 (16:36):
So forgive me, because my producer is going to hate me for this bad joke: ChatGPT, with its hallucinations, clearly needs to stay away from the digital mushrooms. However, the question I have is, how do we think about this? Do we think about this as a productivity tool, in terms

(16:57):
of saving time in research, or do we think about this as more...

Speaker 2 (17:02):
...finding alpha? Right, it's probably the best question one can ask at this juncture, because it's so new, the technology is still being developed, and it opens the possibility for different things. I think it can be used for both, and it will start as more

(17:22):
productivity, because for the alpha piece you need more reasoning and you need more systems embedded in. And for the alpha piece, my intuition is it has to be very focused and strategic. That means you need a lot of human input, humans

(17:44):
designing and coming up with ideas, and getting help from AI to execute those ideas at a scale and speed that humans cannot. But the obvious and probably first step is the productivity side. Whatever you're going to do, AI can help you do it cheaper, faster, better, and that's the obvious thing,

(18:06):
the lowest-hanging fruit. So I think it'll tap both markets. My intuition is productivity is going to happen first, and then it'll fall into the alpha market a little bit. But alpha is very hard, very hard to generate, right? And part of it is an art in some sense; if it were science, you would have figured it out. And so part of that is this

(18:27):
human-AI interaction. So I think for people who can leverage AI, in terms of either coming up with more ideas or executing them better than others, there's value there. But obviously the first step is just to get the productivity layer fixed, and you can get as much value from AI on that layer

(18:50):
initially.

Speaker 1 (18:52):
You know, in preparing for today's call, one of the things I read is that FinPilot is building links into the output that reference the primary source material. Why is this unique, and why is this important?

Speaker 2 (19:06):
Right, yeah, so it is very, very important. For example, take the case of an analyst, an investment banker, sell side, buy side, where the AI can help. Let's say I'm writing a report, or I'm looking at a company and I want to get a head start, and I have AI do a

(19:27):
first draft of a quarter, or some market, or the top 20 companies in an industry sector. I want consolidated information, and the AI has done all the work and I have a report. Now, given at least the way the technology works today, I can never be a

(19:48):
hundred percent sure that the outputs generated by the AI, the numbers, the facts, will always be a hundred percent correct, right? So you cannot simply risk sending that report to a client where even one number is wrong. Once you know, mentally, that AI can hallucinate, you'll think, oh, can I trust any of the other facts?

(20:08):
Right? So building that trust is super critical, because that's the only way to drive adoption of AI tools for the analyst. Our approach is, well, we can't change how LLMs are trained, and we don't have that technology as an industry yet where we have 100% confidence. So it's almost like, what can we do

(20:28):
that's the next best thing? And the next best thing is you try to link and source back pretty much everything, down to every number. So if you think something's fishy, or something doesn't make sense, or something is very critical, you can check it in a click, and then, okay, that's good with me. So I think it's very critical, one, to have the human analyst

(20:51):
trust the AI output and drive the adoption, and then go on to leveraging all the productivity benefits. Without it, it's like, oh, it's a cool tool and a good starting point, but I have to do all the work again because I can't trust it. That defeats the purpose. So I think that's why it's very critical and unique. We've spent a lot of time on that, because early on we

(21:12):
just figured out, as analysts, coming from the hedge fund world: if I can't trust it, I just won't use it. So being able to source everything back is challenging. It means building the right system, optimizing the whole stack, so you can flow back through your system to the primary source document. But to answer your question in one

(21:35):
sentence: it's going to be very important for adoption. Otherwise it will remain at a surface level, versus actually being embedded in workflows that give you that 30, 40, 50% productivity boost.
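The "check it in a click" idea amounts to a verification pass: every figure in a generated answer should be traceable to a retrieved source span, and anything untraceable gets flagged. Here is a toy check along those lines; the function, the sample sources, and the answer text are all invented for illustration, not FinPilot's implementation:

```python
import re

def verify_numbers(answer, sources):
    # Map each number in the answer to the source chunks containing it.
    # An empty list flags a figure that cannot be sourced (a possible
    # hallucination a human should inspect).
    links = {}
    for num in re.findall(r"\d+(?:\.\d+)?", answer):
        links[num] = [s["id"] for s in sources if num in s["text"]]
    return links

sources = [{"id": "10-K item 7", "text": "revenue grew 18% to $4.2 billion"}]
answer = "Revenue grew 18% year over year, reaching 4.2 billion; margins hit 55%."
links = verify_numbers(answer, sources)
print(links["55"])  # → [] : the 55% figure has no source and gets flagged
```

A string match is of course far too crude for real filings (numbers get rounded, restated, or derived), but it shows the contract: an answer is only as trustworthy as the links backing each of its claims.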

Speaker 1 (21:47):
Well, and I think what you're driving at is transparency.

Speaker 2 (21:51):
Exactly right.

Speaker 1 (21:57):
And when I think transparency, it also makes me think of regulation. We're in a highly regulated industry. Are there any particular regulatory concerns associated with using your solution?

Speaker 2 (22:08):
So I will speak to AI and financial regulation in general. It is a big topic, and there are multiple levels to it. One is, the SEC is looking at this, and I think they're trying to understand, okay, what are all the capabilities that AI systems have; they're trying to get

(22:32):
the map right now, right? And I haven't seen anything concrete yet, but there has been some talk about being very careful, especially in the advisory business. You can't have AI systems right now, within the advisory system, that can make

(22:54):
recommendations. That is a very, very tricky path right now, and for good reasons, because these systems are generally intelligent, but you can't really rely on them for something that important and critical. So there will be regulation on that front, my intuition is, and partly because of that, I think

(23:16):
firms and enterprises have taken, not a pause, but a wait-and-see approach, listening for SEC commentary, and so I think they will be more cautious where it's directly interfacing with investors, where the outputs of AI go to investors directly making

(23:36):
decisions. The other aspect of it is material non-public information. If you're an investment bank and you're dealing with a company that's about to go public in three months, obviously that's super sensitive information, and risking that information with an LLM, where it can get leaked or used in ways we don't know, we don't know how OpenAI uses it, is also very critical.

(23:59):
So on that front, I think you might see more private models come up, which take care of that: everything is in-house for that bank, and they don't send anything out there. So you solve that problem by figuring out what piece of technology I want and how I can bring it

(24:19):
in-house. As for us, right now we're focused on research analysts. We're not interfacing with, say, a retail investor or something like that. So we are focused on being a research platform, and on being as transparent as possible within the AI models. It's not a black box at all. You can flow through all the steps the AI took to give you a certain output. So right at the moment, it's not something that's

(24:45):
blocking us from anything. But depending on where you end up, and what application of AI you go after, that will dictate how much AI regulation you have to deal with. But it is super early. I think the SEC is also trying to wrap their head around, like, where should we even begin?

Speaker 1 (25:05):
Right. So, you know, which also begs the question: when you think of fast-moving industries, you do not think financial services. I think we started talking about movement to the cloud at the top of the hype cycle in 2012, and we're only now seeing trading and risk platforms

(25:26):
starting to move to the cloud. How do you change that, especially with newer technology? How do you get people using and deploying, and even recommending, your product?

Speaker 2 (25:39):
Yeah, no, that's a very good comment, because it is true: you do not start with financial services, because it's super regulated. It's been interesting, because I think the value proposition is

(26:01):
so high that it's hard to ignore. And it's surprising to me, when we talk to our customers, and when you talk to potential customers, new people who have not even heard about AI tools: when we communicate what this can do, and we show them, the

(26:23):
excitement is like, oh man, we could save three hours every day for each and every analyst, or something like that. And the other type of customers are even more excited. They know ChatGPT, they've been trying to cajole it into doing what they're already trying to

(26:43):
do. They're more excited than us, and they're giving us ideas: can you do this, please, can you do that? So we've seen somewhat of the opposite problem, where people are coming to us like, hey, can we do this now, can we do that? So it's interesting. Now, maybe it's because we're not in the super regulated layer: we're not trading, we don't have a fund that we're

(27:05):
recommending. So I think maybe that insulates us a little bit at this stage, but as you move across, within the field, within the domain, to different areas, you might have to wrestle with that. So far, you know, one of the things we have in closed

(27:25):
beta is called AI agents, task agents, where you can just give it a task: hey, I'm looking at these top 20 companies in this sector and I want to do XYZ analysis. And you can have a pretty detailed analysis, and it just does that for all 20

(27:46):
companies. Or it can look at incremental changes quarter to quarter in certain qualitative aspects. And that has been getting a lot of momentum, basically because it's so generic for the buy side. It's like, okay, I can just have it do things for me. So that is interesting, where I would have been in the camp of,

(28:10):
okay, how are we ever going to break into it? But the excitement has been quite the opposite. So that's been super interesting.

Speaker 1 (28:21):
You know, sadly, we've made it to the final question of this podcast, because I've got about 10 more questions I want to dive into. But we call this final question the trend drop. It's like a desert island question: if you could only track one trend in AI technology, what would that trend be?

Speaker 2 (28:41):
One thing that I am tracking closely is the latency and the cost of the powerful models. And the reason, and this may not be super important two or three years from now, but in the short term it's really important, is that it allows you to figure out what you can do at a sort of

(29:07):
speed and cost that makes sense, right? Like, if it takes a week to do something, that may not be as valuable, versus a day or two hours. And it goes back to these AI agents that need to do multiple things, hundreds of steps, with rechecking and verification and all that. And so

(29:31):
being able to do all those steps fast and cheaper is going to be very critical. So one thing I am looking at is these curves of how fast inference is on these models. It goes back to algorithmic advancements and

(29:54):
chip advancements, GPU throughput advancements, all these very low-level things, but it translates in a big way to the application layer that we're operating in. So I think that would be one thing that I'm very keen on tracking. And the other thing, if I may, is the open-source models. They are getting good. They're not quite there yet, but if they can match a certain

(30:16):
quality, that would be a huge win for the industry, especially on the private model side and the regulation side. So yeah, you asked me for one, I'll give you two.

Speaker 1 (30:28):
I'll take two, fair enough. Well, Lakshay, I want to thank you so much for your time today and your insights, and I want to congratulate you on your success with FinPilot, as well as future success.

Speaker 2 (30:40):
Thank you so much. Thank you, Jim, I really appreciate it.

Speaker 1 (30:48):
And that wraps up this season of Trading Tomorrow: Navigating Trends in Capital Markets. We appreciate your loyal listenership, and we'll be back with Season 3 after a short break. Make sure you rate, comment on and like our podcast, so we can continue to bring you information and chats on the latest technology changing the financial industry.