
January 27, 2025 · 39 mins

Mandeep Singh, Global Head of Technology Research at Bloomberg Intelligence, joins the podcast to explore how massive investments in artificial intelligence made over the past few years could drive future innovations and profits. From humanoid robots to self-driving vehicles, Mandeep shares his expert analysis on the potential impact of these technologies in the years to come.

----------------------------------------------------------------------------------------------
Subscribe Here to the ROI Podcast & other First Trust Market News
Website: First Trust Portfolios
Connect with us on LinkedIn: First Trust LinkedIn
Follow us on X: First Trust on X
Subscribe to the First Trust YouTube Channel
Subscribe to the ROI Podcast YouTube Channel

Episode Transcript

Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Ryan (00:11):
Hi, welcome to this episode of the First Trust ROI Podcast. I'm Ryan Issakainen, ETF strategist at First Trust. For today's episode, I am very excited to be joined by Mandeep Singh, Global Head of Technology Research at Bloomberg Intelligence. There is a lot going on in the world of technology.

(00:34):
Mandeep just returned from the Consumer Electronics Show in Las Vegas. We're going to talk all about innovation. We're going to talk about artificial intelligence and where some of the opportunities may be for those companies that are investing heavily in AI in the years to come. Thanks for joining us on this episode of the First Trust ROI Podcast. Mandeep, it is great to meet you, to put a face with a name.

(00:54):
I, of course, have seen a lot of your research on the Bloomberg Terminal, and you are, for those that are watching, the Global Head of Technology Research at Bloomberg Intelligence. Before we came on, I was asking you about the Consumer Electronics Show in Las Vegas. You were there last week.

Mandeep (01:17):
Yes, it was really, you know, quite a spectacle in terms of the number of launches, especially the focus around robotics. And really, I mean, the Consumer Electronics Show is always interesting, because they try to deploy a lot of the technology we hear about into these cool devices.

(01:39):
So, yes, quite a show.

Ryan (01:42):
So I'm sure you've been before. It's something I've never done. Have you been going for, like, you know, a long time?

Mandeep (01:49):
Yeah, I mean, there was a gap during the COVID phase. I didn't go for two or three years, but I have been going there a long time, and I would say this year's show was more interesting than the 2024 show.

(02:09):
And the reason I say that is because of all the new things that I guess were talked about, from robotics to self-driving cars, and in general, you know, the emphasis on using LLM technology, the large language models and generative AI, across a broad swath of devices.

(02:34):
We did put out a long report on our takeaways from the show, so happy to get into the details, but a lot of new stuff around LLMs and AI agents. I think that was a big focus this year.

Ryan (02:42):
That's great. Yeah, and I definitely want to talk more about LLMs, about AI, and really what actually is going to generate profits. I mean, there's been massive investments made in the infrastructure and the development and the build-out, so I think that's definitely a topic we want to dig into.

(03:07):
So, before we do, though: 140-plus thousand technology people in Las Vegas. What is that like?

Mandeep (03:09):
Well, I mean, look, these are people from different backgrounds in terms of their interests around the show. Not everyone is looking to, you know, analyze companies or look at investments.

(03:30):
Some of them are there just to formulate those partnerships that can help incorporate the latest technology in their products, and some of them are looking for product ideas. So a lot of people, you're right, and they are very high-caliber people in terms of the exhibitors and what they know, what they are trying to showcase, and that's what makes it interesting.

(03:51):
You have to pick your tracks and focus on, you know, things that you care about, but, at the same time, you can come across people that are very hard to find otherwise.

Ryan (03:57):
So was there any one thing that you can think of, whether it's AI-related or not, that just kind of blows everyone's mind at the show this year? Like, what's the coolest thing that you saw at CES this year?

Mandeep (04:10):
Yeah, I would say the coolest thing was a demo for a robot. You know, again, there were a lot of robotic demos, but I think, just in general, this robot could interact with you.

(04:30):
It could really give you a sense of the future in terms of the humanoid form factor and what it could do in terms of, you know, having that level of personalization and engagement. And, like, it could understand emotions, it could read your body language. It could really help you do some of the things in your home that you were otherwise doing manually.

(04:50):
Maybe it's not going to hit mainstream this year or the next couple of years, but it kind of gave you a glimpse of what's possible using AI.

Ryan (05:03):
So we're living in the era of the Jetsons. Finally, we've got Rosie the maid. I was thinking, because I'm trying to imagine having a robot in my house, and I'm really not sure exactly what it would do to change my life, to incentivize me, you know, to make that investment.

(05:25):
But then I was thinking about all the help that people need as they get older in terms of, you know, home health care aides, that sort of thing. That seems like something you'd want a robot for.

Mandeep (05:38):
Yeah, look, and what the LLMs have showed us is these LLMs have knowledge of the internet, knowledge of the world that's all digitized. And if you distill that knowledge into, let's say, smaller models, which can run locally on a humanoid form factor or any other type of edge device, it can be quite powerful, because it can understand general instructions.

(06:03):
It can, you know, fetch you things in terms of, you know, doing a mundane task that it could be trained on. And, you know, it wasn't possible before, because when you look at Alexa or some of the prior, you know, conversational devices, they didn't have that level of AI embedded into them that they could be generic or go beyond talking about the weather.

(06:24):
So I feel, with AI, we have come a long way in terms of training these chatbots, to a point where they can be quite intelligent in terms of understanding human language, and then they can be trained on tasks that are repetitive.

(06:44):
And I think that was the takeaway over there.
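The distillation step Mandeep describes, compressing a large model's knowledge into a smaller one that can run on an edge device, is usually framed as teacher-student training. A minimal sketch, assuming PyTorch; the loss function and temperature follow the standard distillation recipe, not anything specific to the products discussed:

```python
# A minimal sketch of knowledge distillation: train a small "student"
# (deployable on an edge device) to match the softened output
# distribution of a large "teacher" model.
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, temperature: float = 2.0):
    """KL divergence between softened teacher and student distributions."""
    soft_teacher = F.softmax(teacher_logits / temperature, dim=-1)
    log_student = F.log_softmax(student_logits / temperature, dim=-1)
    # Scaled by T^2, following the standard Hinton et al. recipe.
    return F.kl_div(log_student, soft_teacher, reduction="batchmean") * temperature**2
```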

Ryan (06:55):
So it seems, as I've watched and listened to companies as they disclose their financial results and have their conference calls afterwards, that everyone really wants to be part of the glow of AI and large language models.

(07:16):
And I've often wondered, you know, where does the line get drawn between when something becomes AI versus just, maybe, some evolution of technology getting better? Do you have any way that you think about that?

Mandeep (07:21):
I mean, the best example I can offer is self-driving cars, right? So we've all heard about Waymo launching in five cities last year. They're doing about 200,000 rides a week now, 150 to 200,000. And look, think about how that inflection point has come.

(07:45):
Waymo is not all generative AI; I mean, it's a complex system that's augmented by, you know, LLM technology now. But you need almost 100% precision, because you're talking about, you know, somebody sitting in the car trusting that system to drive safely. And that's where, if it can solve that problem, if AI can be deployed for a problem that requires 100% precision, then you feel like it can do a lot of other things.

(08:07):
And that's where, at CES, you could see humanoids being possible now, because we have solved the self-driving problem. I mean, if you're talking about a scale of 5 million autonomous rides a year, then I think the system is ready and the technology is there.

(08:30):
I mean, yes, there will always be edge cases that need to be solved for and addressed, but I think we all can agree that, you know, a lot of people are trusting these systems every day and riding.

(08:52):
And Zoox had a demo in Las Vegas, where the show was, where you could ride from the airport to the convention center in an autonomous vehicle, and everyone trusted that system and it worked beautifully.

Ryan (08:58):
So anytime there's something new... I started my career right before the internet bubble burst, and so maybe I'm a little bit overly sensitive. I'm always looking for, I'm always concerned and worried about hype cycles. You know, new technologies come out, you know they're going to be disruptive and you know they're going to change things.

(09:19):
But I'm always worried about hype cycles and where we are in the hype cycle. So when it comes to, you know, some of these LLM models and AI in general, where do you think we are in that hype cycle?

Mandeep (09:29):
I mean, still in the early innings, because when you look at GPUs and the general availability of accelerated computing, the technology is quite expensive to deploy, and it's very evident from the spending of these hyperscalers in terms of their CapEx spend.

(09:54):
And the ROI is still kind of not very well established in terms of what kind of returns these companies are getting on that AI spend. With that being said, there are some noticeable use cases, you know, when it comes to the difference this technology is making.

(10:19):
And so I wouldn't call it all hype, because there are tangible kind of proof points when it comes to this technology, whether it's on the ad targeting side, or, you know, having campaigns created by AI, or just, you know, synthesizing kind of intelligent summaries from a bunch of documents, or generating things based on a prompt.

(10:42):
All this wasn't possible before, and these are the things that can make a difference in terms of how we work on a day-to-day basis, how much more productive we can be as knowledge workers, and I think that's the potential that everyone feels there is with generative AI and LLMs. And customer service: I mean, I came across a lot of examples at the show around deploying AI agents.

(11:03):
Now, AI agents are nothing but, you know, chatbots that can do things end-to-end in terms of having a conversation, taking a follow-up action, because they have all the knowledge, you know, from the internet as well as, you know, at the enterprise level, because they have been trained on those sorts of documents, and they can find that needle in the haystack faster than a human agent can.

(11:24):
And so that's the promise. But look, there will always be challenges. Can we overinvest in the near term? It's possible.

(11:45):
But at least right now, from what I'm seeing, we are still supply constrained when it comes to these chips, and we are nowhere close to having overcapacity or things sitting idle. So I definitely don't see that for now.
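The agent pattern Mandeep outlines, a chatbot that converses and then takes a follow-up action, is essentially a loop in which the model either answers or requests a tool call. A minimal sketch; the llm() function and both tools are hypothetical stand-ins, not any specific vendor's API:

```python
# A minimal sketch of an AI-agent loop: the model either answers or asks
# for a tool call, and the loop executes the follow-up action.
import json

def llm(messages: list) -> dict:
    raise NotImplementedError  # expected: {"answer": ...} or {"tool": ..., "args": {...}}

TOOLS = {
    "lookup_order": lambda order_id: {"order_id": order_id, "status": "shipped"},
    "issue_refund": lambda order_id: {"order_id": order_id, "refunded": True},
}

def run_agent(user_message: str, max_steps: int = 5) -> str:
    messages = [{"role": "user", "content": user_message}]
    for _ in range(max_steps):
        reply = llm(messages)
        if "answer" in reply:
            return reply["answer"]  # conversation handled end-to-end
        result = TOOLS[reply["tool"]](**reply["args"])  # take the follow-up action
        messages.append({"role": "tool", "content": json.dumps(result)})
    return "Escalating to a human agent."  # bail out after too many steps
```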

Ryan (11:57):
Do you have a sense of how many years out the spend, the capital spending, will need to slow down? Because, you know, it'll increase, but it can't increase forever. So at some point it needs to slow, and I just don't know when, and I don't think anyone really knows.

(12:18):
But do you have a sense of when that could slow?

Mandeep (12:20):
I mean, I'm looking for that digestion phase, which I think will come in the next two, three years, for the companies that are spending. You know, Microsoft said they will be spending $80 billion on AI CapEx this year. Big number.

(12:41):
And when you think about that scale, you know, going forward, it starts to eat into all the free cash flow that the company is generating. So investors will want to see how much that's contributing to the top-line growth. They've said AI inferencing is a $10 billion run-rate business. That's expected to double, in my opinion, to $20 billion this year.

(13:03):
And look, I think the key is that they stay transparent with the investors on how that's being used. And AI training and scaling laws are a big factor in this, because if the next version of the model is at least 20% better than the previous version, then you know that intelligence is being created using the model training, and so companies have incentive to train.

(13:24):
The moment we start to see a plateauing of the improvement in LLMs, that's where you could expect a pause. But right now, I feel companies are being very creative when it comes to training these models.

(13:48):
There is, you know, scaling at the time of inferencing, so that's the other thing that is keeping interest high when it comes to some of the novel approaches that these companies have come up with.

(14:09):
But digestion in CapEx, and you could call it a slowdown, that's where you could expect that to happen in the next two, three years, because I don't expect years of 50%, 60% CapEx increases to continue for the next two, three years. There will be a flatlining.
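The scaling laws Mandeep keeps returning to are usually written as a power law in parameters and training tokens. A back-of-the-envelope sketch, using the published Chinchilla-fit constants purely as an illustration of why each generation's improvement can shrink; these numbers are not from the conversation:

```python
# Chinchilla-style loss as a function of model size and training data.
def predicted_loss(params_b: float, tokens_b: float,
                   E=1.69, A=406.4, B=410.7, alpha=0.34, beta=0.28) -> float:
    N, D = params_b * 1e9, tokens_b * 1e9  # convert billions to raw counts
    return E + A / N**alpha + B / D**beta

for n, d in [(70, 1_400), (400, 8_000), (1_000, 20_000)]:
    print(f"{n:>5}B params, {d:>6}B tokens -> loss ~{predicted_loss(n, d):.3f}")
# Each multi-x jump in compute buys a smaller absolute gain; the "plateau"
# risk Mandeep flags is these increments shrinking toward the floor E.
```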

Ryan (14:25):
Most of that CapEx spend has been from the largest companies in the world, essentially the hyperscalers, building out data centers. Is that where the spending will continue, or will it flow downstream to smaller companies? Or is there really no incentive to not just use the services that some of the data centers have already set up?

Mandeep (14:48):
I mean, look, I think right now the constraints are in multiple places. So, yes, the data center capacity is a constraint, but also the power, and we know the power infrastructure is not easy to expand. You really have to make long-term decisions in terms of, how do we build data centers that can have 10 times the power that they have right now?

(15:12):
And it's not easy, because the power doesn't grow like that. It grows more in line with GDP. And plus, these chips need more power for training.

(15:35):
So the other constraint you have is the infrastructure accompanying that data center, whether it's on the cooling side or the cables that are transmitting the power. Everything needs to be changed. Now, whether we start using nuclear or other options, I mean, all that is on the table, because the clusters of these chips are growing.

(15:55):
I mean, Jensen talked about, you know, an AI factory with one million chips, you know, put together. Right now, the largest cluster we have, or have talked about, is a 100,000 GPU cluster. So to 10x that requires a 10x in power, which I don't think we have available right now.

(16:15):
So there are all these practical constraints that require CapEx spend on different fronts, not just in terms of getting data center, you know, real estate and getting the chips, but also all the accompanying infrastructure. And I think that's where different companies will have to participate.

(16:36):
The governments obviously will be involved, and that's why this is a much bigger theme than anyone imagined, at least a couple of years back.
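Rough arithmetic behind the 10x power point, as a sketch; the per-GPU wattage is an assumed round number for illustration, not a figure from the conversation:

```python
# Illustrative cluster power math. 1.5 kW per GPU (chip plus cooling and
# networking overhead) is an assumption.
KW_PER_GPU = 1.5

for gpus in (100_000, 1_000_000):
    megawatts = gpus * KW_PER_GPU / 1_000
    print(f"{gpus:>9,} GPUs -> ~{megawatts:,.0f} MW")
# ~150 MW for a 100k-GPU cluster versus ~1,500 MW for a million-chip
# "AI factory", on the order of a large nuclear reactor's entire output.
```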

Ryan (16:47):
Yeah, the power. I'm glad you brought that up, because that's something that we've often wondered about. It seems like nuclear power would be a great solution, and obviously, you know, Constellation reopening Three Mile Island to supply some power to Microsoft, that's something that can be done relatively quickly, within a few years.

(17:07):
But, you know, you look at the most recent built-from-scratch nuclear plant, in Georgia. It took something like 13 or 14 years to get that built, and it was like $30 billion plus, something like that. And I know there are small modular reactors, but those aren't really commercialized yet, and, you know, they seem like they are years out.

(17:31):
So this is something where I just wonder where the power is going to come from.

Mandeep (17:32):
Well, and that's the million-dollar question, in terms of how do we go about, you know, adding that power, and, you know, how quickly can we do it?

Ryan (17:45):
Because it's a race, isn't it?

Mandeep (17:47):
It is a race, but the only caveat I would throw in there is if the model stops scaling. So all this is contingent on, you know, building a one million chip cluster. Why do you need to build a one million chip cluster? Because that could help train your next version of the model, which is smarter than the previous version.

(18:09):
Now, if all the experts see, oh, we are plateauing in terms of model intelligence because we are running out of data... And look, all these models are trained on the entirety of internet data that's available right now. So, granted, the amount of data will grow, but it's not going to grow at the same pace at which they've trained these models so far, because it included the entirety of the internet.

(18:29):
Now, the pace of growth is much slower than all the data that's already available. So that's the big risk: can we rely on synthetic data, which is a term that's used a lot, where we curate data to train these models?

(18:53):
Or we can use inference-time scaling, where we basically use a different approach in terms of how the model answers the question, and it can use different paths to answering a question, as opposed to just one prompt-based response.

(19:15):
So all this is dependent on intelligence continuing to scale.
Now, as we know, you know, machine learning was there before large language models and generative AI came on the scene. And what was great about generative AI and transformer-based models was that they were much more sophisticated than machine learning, which required a lot of supervised training.

(19:38):
This could be unsupervised: you could pass it, you know, large amounts of data, and it still had the potential to generate a model which is in billions of parameters. And it scaled beautifully.

(20:00):
So the scaling laws are probably the single most important thing when it comes to carrying this wave and keeping that spend and interest in deploying generative AI. The moment we start to hear about a plateauing of that, that's the big risk. That's when everything will kind of come to a pause.

(20:22):
I mean, I'm sure governments and companies will keep spending in terms of adding and deploying that infrastructure, but that sense of urgency, I think, will go away if everyone realizes there are limits to how much these models can scale.
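The inference-time scaling Mandeep mentions, exploring several reasoning paths instead of one prompt-based response, can be sketched as a simple self-consistency loop; sample_answer() here is a hypothetical stand-in for any LLM call:

```python
# A minimal sketch of inference-time scaling via self-consistency: sample
# several reasoning paths and keep the majority answer.
from collections import Counter

def sample_answer(prompt: str) -> str:
    raise NotImplementedError  # placeholder: one sampled (temperature > 0) model response

def self_consistent_answer(prompt: str, n_paths: int = 8) -> str:
    # Spend more compute at inference time instead of training a bigger model.
    answers = [sample_answer(prompt) for _ in range(n_paths)]
    return Counter(answers).most_common(1)[0][0]  # majority vote
```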

Ryan (20:33):
So you mentioned synthetic data. Is that just basically derived data that is produced, where you've got some conclusion that's been reached, and you assume that that's the data that you then build upon? Is that kind of it?

Mandeep (20:45):
Yeah, I mean, basically, this is not data that's coming from any transaction. So a good example would be, I mean, Waymo and Tesla have collected real miles of data based on, you know, the algorithm-driven driving that has been done on the roads, and I mentioned five cities for Waymo; same thing for Tesla FSD.

(21:07):
Now, if they had to use some other type of data that is currently not generated from real-life studies, you know...

(21:36):
Like, if you're studying a protein structure, these are the different combinations that you could use to train an AI.

Ryan (21:42):
So, like the AlphaFold project, is that considered synthetic data?

Mandeep (21:51):
Yeah, I mean, exactly. And so you consider all permutations and combinations, and at the same time you have to weight it toward real-life data more, because that is the data that has been observed. But then, if you have to cover the edge cases, the AI has to know about all the other potential combinations.
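The permutations-and-combinations idea can be sketched in a few lines: enumerate scenario combinations that real-world data rarely covers, weighted below observed data. The driving-scenario fields here are hypothetical, chosen only to echo the Waymo/Tesla example:

```python
# A minimal sketch of synthetic edge-case generation by enumeration.
from itertools import product

weather = ["clear", "rain", "snow", "fog"]
lighting = ["day", "dusk", "night"]
hazard = ["pedestrian", "cyclist", "stalled car", "debris"]

synthetic_scenarios = [
    {"weather": w, "lighting": l, "hazard": h, "weight": 0.1}  # real miles get weight 1.0
    for w, l, h in product(weather, lighting, hazard)
]
print(len(synthetic_scenarios), "synthetic edge-case scenarios")  # 48 combinations
```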

Ryan (22:10):
Yeah, I think the whole biotech area of AI is fascinating, and it seems like there's a lot more that can be known that's currently unknown, compared to, you know, an agent that I call up because I need my flight changed, or something like that; there are diminishing returns to making that better and better.

(22:30):
You know, it gets to a point where it's, you know, good enough, right? But when you're trying to come up with a cure for diseases, and understanding how biology works, it seems like there's a lot more that you could actually understand.

Mandeep (22:41):
And I would keep the expectations low. I don't think AI would help you find a cure for diseases. It's more of an augmenter, something that will augment your knowledge work, whether you're sitting at a desktop and doing your work, be it any kind of work.

(23:01):
So if you're a researcher, AI will augment your research, but I doubt it can find a cure for diseases on its own.

Ryan (23:05):
Yeah, it's a tool, right? It is a tool, because if you have that AlphaFold library of proteins, then you don't have to spend months and months discovering what that protein's three-dimensional structure is, because you're 99% there with that part of the puzzle. But then you have to do something with that.

Mandeep (23:24):
Yeah, and also it can understand that lingo a lot better. One of the things we've seen with AI agents is these agents are trained on all the conversations people have had over the years, whether it's audio or text conversations, and then they can understand what you are asking now as a result of that, because of all that wealth of training data.

(23:45):
So it's the same concept everywhere: these agents build on what has already been observed. I mean, think of the different types of customer service conversations we have in our lives. So if AI can, you know, learn from that, and they know how we ask a question and what the typical responses are, from an enterprise standpoint, that's huge.

(24:06):
And the scale of these models allows them to learn from pretty much any and every type of conversation, because of the number of parameters. I mean, we are talking about 400-plus billion parameters when it comes to, you know, the Llama model or some of these frontier models,

(24:29):
and that's what gives them that wealth of knowledge in terms of understanding what someone is asking.

Ryan (24:38):
And it seems like they have to. I have had the pleasure of being on a number of calls with customer service agents, and some are really good, some are really bad. So there has to be some sort of fine-tuning that goes on as these models are learning, right?

Mandeep (24:51):
Oh yeah, I mean, that's where all the time is spent. I mean, otherwise these models would be ready to, you know, be deployed for a wide variety of use cases right now. The reason why an OpenAI model is still not production-ready is because it needs to be fine-tuned according to your particular use case.

(25:12):
If you're an airline, you have to fine-tune it with your customer service data set. If you are into other types of use cases, whether it's sales or a service desk, I mean, these are different types of conversations.

(25:33):
So an LLM has that generic knowledge, but it needs to have those customized, fine-tuned versions of the conversations. And then you have to account for the hallucinations and make sure the responses are in line with your compliance goals. So all that is kind of very iterative.

(25:56):
And that's why it takes time to deploy.
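The fine-tuning step Mandeep describes, an airline adapting a generic model with its own customer-service conversations, typically starts with training examples in a chat-style JSONL format. A minimal sketch; the dialogue and file name are invented for illustration:

```python
# A minimal sketch of preparing fine-tuning data in the common
# chat-style JSONL convention.
import json

examples = [
    {"messages": [
        {"role": "system", "content": "You are a helpful airline support agent."},
        {"role": "user", "content": "I need to change my flight to Tuesday."},
        {"role": "assistant", "content": "Happy to help. Could you share your booking reference?"},
    ]},
]

with open("airline_support_finetune.jsonl", "w") as f:
    for example in examples:
        f.write(json.dumps(example) + "\n")
# The iterative loop Mandeep alludes to: tune, evaluate for hallucinated
# policies, add corrective examples, repeat until responses meet compliance.
```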

Ryan (25:57):
Is that the most difficult part, do you think, of developing a workable large language model?

Mandeep (26:03):
I mean, look, I think that's what the companies are spending time on, and some of them, because of the compliance aspects, will take longer just to test it out.

(26:26):
And others are more out there, because, you know, even if an algorithm or, you know, a chatbot cannot answer all the questions perfectly, they're okay with that, because they just want to deploy this technology faster. So that's where, you know, every sector is different; every, you know, customer service use case is different. But I would say it's very iterative.

(26:49):
There's no way you can think about all the edge use cases right from the get-go, and you have to, you know, iterate on it.

Ryan (26:49):
That's very interesting. So I wanted to ask you a little bit about regulations. It's shifting gears a little bit, but this is something that always comes up.

(27:12):
And, you know, it's somewhat related to concerns about what decisions AI is going to be allowed to make at some point in the future, and what potential harm could come from that; and balancing between adding too much regulation, so that it stifles innovation, and, on the other hand, making sure that safety is addressed. So do you have any thoughts on, I don't know, where you think that's going to go, or where it should go, even?

Mandeep (27:35):
Yeah, look, I mean, one of the things about these models is the prompt can be anything. You know, every company talks about multimodality, which is basically, you can give text-based prompts, video, audio. And then the length of prompts has really grown manyfold since the time the first version was launched, so you can really game the chatbot into giving responses which obviously haven't been tested for.

(27:58):
And that's where, you know, putting those guardrails in is paramount. So every regulator, I'm sure, will be focused on what sort of guardrails these LLM companies have to implement, and then the companies which are deploying these LLMs in their products will do the same from their side. But there's the AI Act in the EU, and there are so many different approaches that the regulators are thinking about.

(28:24):
Obviously we have a new incoming administration here in the US. I would be interested to see what the AI czar, I think David Sacks, will be looking at in terms of deploying AI.

(28:45):
But nobody wants to curtail the ambitions of these companies when it comes to AI, which is why the hyperscalers are spending so aggressively, because I feel they are confident that the right guardrails can be implemented. And obviously these companies have a lot of data to customize these LLMs.

(29:07):
I mean, think of the hyperscalers as having the data that you need to customize the LLMs, and so they are the right partners when it comes to deploying this technology. And they have a vested interest in terms of making sure regulators are convinced that these LLMs can function in a way where they're not putting anything at risk and won't cause any harm when it comes to sensitive intellectual property.
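A guardrail layer of the kind Mandeep mentions can be as simple as screening the prompt before it reaches the model and the reply before it reaches the user. A toy sketch; the string checks and the llm() placeholder are illustrative only, and production systems typically use trained moderation classifiers instead:

```python
# A toy sketch of input- and output-side guardrails around a model call.
BLOCKED_PATTERNS = ("ignore previous instructions", "reveal your system prompt")

def llm(prompt: str) -> str:
    raise NotImplementedError  # stand-in for any model call

def guarded_reply(prompt: str) -> str:
    if any(p in prompt.lower() for p in BLOCKED_PATTERNS):
        return "Sorry, I can't help with that request."  # input-side guardrail
    reply = llm(prompt)
    if "confidential" in reply.lower():
        return "Sorry, I can't share that information."  # output-side check
    return reply
```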

Ryan (29:31):
It seems like these companies that are making these massive investments have a pretty secure moat. I don't know how you could have a new competitor, and add to that the potential for some regulatory capture, you know, where you're putting in regulations because you can comply with them, but there's no way a startup is going to be able to. Do you agree with that?

(29:53):
Do you think there is a pretty wide moat for these? It doesn't seem like new competitors can come in, at least for the hyperscalers.

Mandeep (30:02):
I mean, what's good is, if you think about even the hyperscalers, not everyone has their own large language model. So Microsoft is relying on OpenAI, which is a company with an independent $150 billion-plus valuation; Anthropic was recently valued at $60 billion on its own.

(30:22):
Now it's partnered with Amazon. But that's the good part about this technology: we've already picked some winners, and the two names I mentioned. And then you have got Mistral and Cohere. So there are a lot of companies that are training their foundational models and then partnering with the hyperscalers.

(30:43):
Now, you also have a couple of hyperscalers that do everything. I mean, Alphabet has their own chip, they have the largest data set, and they have their own large language model. So I do think every company has to be looked at from a different lens. But the fact that we have got, at least, you know, four or five foundational model companies, even though they are partnering with the hyperscalers, that's a good thing.

(31:05):
I mean, these are already, you know, large companies when it comes to their private valuations.

Ryan (31:14):
What about intellectual property, when it comes to just kind of using everything on the internet? And I know that especially applies to certain forms of media, if you're doing something creative, making movies or something like that, but even getting news from news sources. I'm not really sure what to think about that, to be honest.

(31:40):
Any thoughts on whether there is a potential that some of these AI companies will have to pay some sort of licensing fee to the companies that are producing news, or something like that? I mean, I don't know, I'm not even sure what my question is for you.

Mandeep (31:49):
Yeah, no, actually, that has already started. If you think about OpenAI and Alphabet, they are paying for content right now. OpenAI is paying for New York Times content; I know Alphabet has a big contract with Reddit to pay for their content. And so all these LLM companies, I mean, look, the way they trained the original version of their large language model, I'm sure they used a lot of content which wasn't licensed properly.

(32:21):
That's how they trained their first version, and that's how the model got so good in terms of understanding everything on the internet. But going forward, and given these companies are well established now, they are paying for content, and I'm sure that'll be the case more and more, because that's how you keep the model up to date. As well, there's more awareness about how important the original content is when it comes to deploying generative AI.

(32:43):
So these companies that own content and have the intellectual property for that content are very conscious of the fact that they need to monetize the content through licensing now.

(33:05):
So no longer can you scrape a website or some other source to train. That was the case, maybe, a few years back, but now it's not possible.

Ryan (33:15):
The case, maybe a few years back, but now it's not
possible.
So put yourself five years inthe future.
What is there a specificapplication that you are most
excited about that you thinkfive years from now, we're going
to be like man, this is, thisis life changing.
This is so disruptive.
You know, I think of what theiPhone turned into, I mean the

(33:36):
smartphone industry.
We didn't know five yearsbefore the iPhone that that was
going to be a thing, but it useda lot of technology, used the
internet and so forth.
So five years from now, isthere anything you can think of
that we'll look back and justsay, man, that changed
everything.

Mandeep (33:53):
Yeah, I think we probably are in one of those moments, simply because the technology has so much underneath in terms of new capabilities, and, look, our form factor for smartphones and PCs may evolve as a result of that.

(34:15):
I am excited about that humanoid device, or robot, however you want to call it, because if it can do those mundane tasks at home, I'm sure people will be willing to pay for it. And self-driving cars.

(34:35):
I mean, look, I think we enjoy our driving, but at the same time, when it comes to driving when there are long lines and traffic, no one enjoys that. So if there was a way to trust these autonomous vehicles for a large portion of our driving, I think people wouldn't mind doing that.

(34:58):
So all these are changes that I think will be a result of LLMs and generative AI and the advancements in AI, and that's what makes me excited about, you know, the changes going forward.

Ryan (35:02):
So that just brings to mind: if I'm riding in a car, no one's driving it, and I'm relying on a system to drive it, it makes me worried about cybersecurity. Should I be worried about cybersecurity? Because someone could hack into that car and want to crash me into another car, I guess.

(35:24):
My more general question, though, is: do we need to be worried about cybersecurity when it comes to some of these applications?

Mandeep (35:27):
I mean, look, there will be more data generated from all these different machines, and that was the promise of IoT and big data, but I think more so now, given we are talking about, you know, AI that will permeate our lives. And I think protecting the identity, protecting the data, is always very important when it comes to the cybersecurity side of things.

(35:51):
I mean, there is a race going on between the nation-states when it comes to developing AI and deploying AI, whether it's competing with China or any other nation. I think that will be paramount, and, as part of that, you have to protect your intellectual property if you're a nation that is investing a lot of dollars in R&D and developing your technology.

(36:13):
So I think cybersecurity has always been important when it comes to digital information, and given it's going to grow manyfold with these technologies, I do expect the importance to grow.

Ryan (36:38):
All right, I'm going to throw you a curveball here as we start to wrap things up. One of the questions that I've really enjoyed asking over the last year and a half, since we've been doing this podcast, is: okay, so, you've been at Bloomberg Intelligence for a while. Let's say you went a different route and you weren't in finance, you weren't an analyst. What do you think you'd be doing right now?

Mandeep (36:55):
I mean, I would love to be involved with using these LLMs for developing an application, and really reimagining how we can make better software or more intelligent systems. I think it's fascinating, yeah.

Ryan (37:11):
Yeah, very cool. All right, final question for you: any book recommendations? It doesn't have to be related to technology. What are you reading these days? Is there anything that you've read, or that's on the Mandeep Singh book list, that you could maybe recommend to viewers of the ROI podcast?

Mandeep (37:31):
I mean, I read a lot of journals. I don't get time to read a lot of fiction, so I don't have any good recommendations. But really, you know, a lot of journals related to tech, and that's something that I really enjoy.

(37:54):
They come up with a lot of cool use cases that I can't imagine on my own.

Ryan (37:55):
So, you know, it's almost like science fiction. Honestly, I think we're living in an age that's very close to science fiction; or at least, if you were to tell people 20 years ago about some of the things that are being developed today, it's basically science fiction from 20 years ago.

(38:15):
And my hope is it doesn't get into, like, the Terminator, you know, where the machines turn against us. I don't think we have to worry about that, do we?

Mandeep (38:21):
No, it'll be a lot slower than what people expect. So, look, I mean, we will continue to talk about infrastructure, and then there's that great use case or application that will come about in the next two, three years. But we're still running mainframe systems, you know, for a lot of the critical applications.

(38:41):
So these sorts of things, even though, you know, they are transformative, it takes a lot longer to deploy these applications and bring about the change. But at the same time, I mean, this is existential, so you can't really ignore it. At the same time, it's not going to just change everything overnight.

(39:02):
You know, people are smart. The ones who are making these decisions know the pitfalls of these technologies, and I have faith in, you know, the companies that are in charge, that they will make the right decisions. And, look, the governments will have the oversight and the regulators will do their thing. But on the whole, I'm hoping for a more productive future.

Ryan (39:26):
Well, that's a great place to leave the conversation. Mandeep Singh, Global Head of Technology Research at Bloomberg Intelligence, it's been great talking with you. Maybe we can do this again sometime. I really appreciate your time, and thanks to all of you who have joined us on this episode of the First Trust ROI Podcast. We will see you next time.