Episode Transcript
Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Speaker 1 (00:00):
Hello everybody, I'm Alex, CEO and founder of Marspace, and in this episode we bring you one of the conversations we had at the Corporate Innovation Summit. The Corporate Innovation Summit is an event organized by Marspace that we have been running since 2020, in which we bring together in one room the founders of companies like startups and scale-ups, and the decision makers of big
(00:21):
corporates in the field of innovation. We've been doing this since 2020 to bridge the gap between corporates and startups, and also to make more business connections happen.
In this case, we bring you the first panel of the event, focused on agentic AI. We'll have a very interesting conversation about the legal implications, how to make that
(00:43):
happen, whether it's surviving the hype or not, and the technical viability of committing fraud, and of protecting ourselves from fraud as well. So, without further ado, let's jump right into this episode.
Without further ado, ladies and gentlemen, let me introduce
(01:08):
the panelists of this first session of the day.
So, people sitting in the front row: pretty much every one of you is in this super crowded panel. It's going to be so interesting.
So please give it up for Asier, Anthony, Caroline, Greg and Sarah. Please join me on stage.
(01:35):
Good, so everybody pick your seat according to where you're... No, actually we're missing one.
I'm missing.
There was no space for me there.
I'm not important, come on.
Well, welcome.
We're missing one mic.
(01:55):
I guess we have... Not everybody has one.
Let's kick it off.
So welcome to the show, Asier. I guess most people in the audience will be like: NVIDIA, let's talk about stonks.
Speaker 3 (02:14):
Now it's on.
Speaker 1 (02:18):
We could be talking about the other hyperscalers. We could be talking about whether agents are important or not important, whether they're going to be solving what we're doing or giving us more trouble than they actually solve. But some people are like: are you going to ask them about valuation? And do you feel the pressure of getting constantly asked why NVIDIA single-handedly moves the needle in the stock market?
(02:41):
And it's not the big players anymore, it's just freaking NVIDIA.
Speaker 3 (02:46):
But yeah. So basically, when every company at Mobile World Congress says that they are the best one, that they have the best engineers: well, we do. So I think we are the only one in that case and, of course, the market likes that. I cannot speak about what people are using their money for, but I think we are doing a very good job. We try to recruit the best talent here.
(03:07):
We have some people applying to NVIDIA here in the audience. Basically, we started having fun: we started with video games, and then we realized that graphics cards are very, very good for almost everything, even simulating quantum computers right now, even creating AI
(03:28):
agents.
So, yes, I think we are doing a very good job, and that's why the market, well, the people in the market, like us. We also like people, and we also have a very good internal culture. We like to hire the best, we like to take care of everyone, and I'm really happy at NVIDIA. Before, I was at IBM working on quantum. I was happy. Not really happy,
(03:49):
but I was happy. It was good.
So the good thing is that I have seen both worlds: the quantum world, which I love. Probably, if NVIDIA has a quantum computer in the future, I will be on quantum again, because I love quantum. But AI is also very exciting. We have news almost every week.
(04:11):
Everything is super exciting. I was commenting on this with a guy there: we are seeing, I don't know, 10 news items per day, and each week there is something really new, and we barely have the time to follow this pace. I don't know if you feel it, but too many things are happening right now. It's very exciting, because almost every idea you have now, you can prototype in one day using agents, using ChatGPT or
(04:35):
whatever service you want. So I think we are living in a very interesting time right now. For quantum it's almost the same; probably in a few years it will be much better. We are always waiting for quantum applicability. I'm not going to give a prediction on when quantum is going to be the thing, because Jensen did that, and I'm not going to do it this time.
(04:56):
So I hope we have quantum soon. I think AI is going to help us understand quantum computers much better, because right now quantum circuits are not super understandable. There is something called quantum intuition, which is something you build by interacting with quantum computers, and I think that's something AI can develop,
(05:18):
quantum intuition to createsomething.
Speaker 1 (05:23):
Thank you. Certainly, I don't know if you've got the best developers, but at least you've got the best mics, so I'll grant you that. Now, there's another question for you actually. So, going back to what you mentioned, that we can build more stuff than ever, more rapidly than ever: that creates some sort of sensory overload, or the feeling of getting burned out by AI, right? The feeling that you can do almost anything.
(05:45):
Right?
Twenty years ago I couldn't do graphic editing, video editing, writing a novel, creating images, and maybe even other crazy stuff, but right now I can. So I've got a ton of abandoned side projects. You're quite the multitasker and fire starter.
(06:06):
How are you dealing with this overload, this sensation of having too many projects that you cannot handle?
Speaker 3 (06:14):
Yes, we were speaking a bit about that before. Yes, I understand you. I try to have a very good garbage collector and forget quickly about old things, because if not, I cannot deal with everything. But it is true that before I had more time to explore projects, and now I don't have that time, because there are too
(06:34):
many things coming, and every time something new comes, there is a new idea that you want to explore. So I think we don't have a lot of space for creativity now. And also because I think it's not trendy now to get bored, so everyone is going to the phone all the time. So we have too many engineers on the phone right now, or dealing with too many projects, and we don't have the time to
(06:56):
stay in a project for a long time. I think that's a bit of a problem right now.
Speaker 1 (07:02):
Anthony, speaking about learning so many things: one of the sectors that has got to keep up with regulation, technology and what's going on is lawyers, because you're supposed to protect us from fucking up, right? So more than ever, you have to catch up with reality. So how do you do it internally to learn about the implications
(07:23):
of, even quantum, but agents, so AI, and in the past it must have been blockchain, VR or whatnot. All of these new waves come with a smaller cadence right now, they come more frequently than ever. How do you keep up with that internally?
Speaker 4 (07:38):
Yeah, we just make sure everyone's really scared of what could happen. Yeah, I mean, we're in a strange place, because we're dealing with large language models and we sell language, right? That's our primary thing we sell. So in some senses, you might say we're screwed, right? There's a real chance of that, but it's also very exciting.
(08:01):
We're definitely not bored, right? There's a lot for us to focus on about how we change our business and how we think about the model of the legal industry for the future. And I think, yeah, it's kind of hackneyed to say it, but it's obvious that, whilst it's definitely a challenge to us that we suddenly have a tool that can do a lot of the
(08:26):
certainly the sort of grunt work, the basic legal work, that many of our people have trained themselves on, it's also a fantastic way of delivering great work to our clients in a super efficient way. And if we can harness that, if we can use it in a way that is going to set us apart from the next lawyer, we're going to have a business for the future. But it's definitely something that occupies our mind on a
(08:49):
daily basis. We spend lots and lots of time and energy worrying about it.
Speaker 1 (08:53):
So how do you use it internally? Have you got any specific examples, specifically about agents? How do you use it, your team, your departments? Hey, we built this tool, we built custom GPTs, we built some agents for... you know.
Speaker 4 (09:09):
I mean, there's a lot of paranoia in law firms about using large language models, because a lot of what we do is obviously super sensitive. It's confidential. What we have to be really careful of is allowing client-confidential information to be put into a model where it might ultimately be available to anyone and everyone, right? So we have to be really careful about how we're doing it. So there are a number of models being sold to law firms.
(09:32):
One of the most prominent ones, which maybe no one in this room has heard of apart from lawyers, is one called Harvey. Harvey is a tool that is essentially based on ChatGPT, but it's kind of a walled-garden approach, so we can have confidence that information we put into it is going to remain confidential. It's trained on our precedents, it's using our voice, as it
(09:55):
were, and we're starting to get people to learn how to use this as a tool for document review, for document creation. There are some really obvious use cases. Due diligence is a very obvious one. It has a tool where you can throw in a thousand documents and say: tell me where the risks are in each of these documents,
(10:15):
and it will produce a very simple report for you very quickly. Equally: hey, the lawyer on the other side has written back to us and he's marked up the document in this way, tell me all the key points. And it spits out a nice issues list straight away. So there are tools like that that are very helpful.
There are, obviously, a whole range of different tools that we're using. Actually, I think the document review
(10:37):
stuff is kind of done, in a way. It's there, it's happened, it's changed our world and we've got to get used to using it. I think one of the interesting things for me about how we can use these tools to set ourselves apart, and people talk about this all the time, is: how, as individuals, do
(10:59):
you teach the lawyers to start thinking more laterally about these tools, and how can they use them to add more value to their clients and to be more valuable themselves?
So a really simple example I give to the lawyers in my team is about business development. I was thinking about an event the other day. I'm really interested
(11:20):
in neuromorphic chips, and I wanted to know about neuromorphic chips: how could we do an event around that? So I put it into one of the models and said: right, give me the top 200 companies in Europe who are involved in this. Who is the CEO? Who's the CFO? Who's the GC? Give me their LinkedIn profiles. Who's the most interesting talker in this space right now?
(11:40):
What can they talk about in front of a group of 100 people that will fascinate them? Who should be on each panel? I had an amazing event inside half an hour that would have taken our business development team three months to pull together. So thinking more laterally about how we can use these tools to make ourselves just that little bit more superhuman, I think, is a really important skill that lots of our
(12:01):
lawyers are going to have to learn.
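[Editor's note: the batch document-review pattern described above, documents in, risk report out, can be sketched roughly as follows. `ask_llm` is a hypothetical stub standing in for whichever confidential, firm-approved model is actually used; Harvey's real API is not public, so only the control flow is illustrated.]

```python
# Minimal sketch of the batch document-review pattern described above.
# `ask_llm` is a hypothetical placeholder for a call to a confidential,
# firm-approved LLM; it is stubbed here so the loop can run on its own.

def ask_llm(prompt: str) -> str:
    """Placeholder for a call to a walled-garden legal LLM."""
    return f"[model summary for prompt of {len(prompt)} chars]"

def review_documents(documents: dict[str, str]) -> dict[str, str]:
    """Ask the model, per document, where the legal risks are."""
    report = {}
    for name, text in documents.items():
        prompt = (
            "You are reviewing a contract for legal risk.\n"
            "List the key risks in the following document:\n\n" + text
        )
        report[name] = ask_llm(prompt)
    return report

if __name__ == "__main__":
    docs = {
        "nda.txt": "The receiving party shall keep all information secret...",
        "lease.txt": "The tenant is liable for all structural repairs...",
    }
    for doc, risks in review_documents(docs).items():
        print(doc, "->", risks)
```

The same loop generalizes to the "marked-up document in, issues list out" use case by swapping the prompt.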
Speaker 1 (12:02):
Should have done the same instead of putting in the three months of work to organize this event. So for next year, there you go, but you'll be on the panel. No jokes about this.
Moving on. Caroline, it's the second time Moody's speaks at the event. Two years ago, Sergio was here, but we nerded out more about the
(12:24):
collaboration between AI, so AI startups, and corporates. In this case, I'm much more interested in how you measure productivity. Is there any sort of specific KPI in the company that you use across departments? Because so much money has been deployed, as in: hey, everybody uses AI right now. Everybody uses Copilot, everybody uses Raycast AI,
(12:46):
everybody uses whatever. Then you find out that, no, not a lot of people use it, not a lot of people are actually making the best use of it, and you're overpaying, right? Have you identified which parts of the workflow, which departments, it is best suited for? I'm not sure if my mic is on.
Speaker 2 (13:05):
No, if not, you get mine. So it's a very good question, and I know we were told not to say we do the best things, but come on. I think we were, well, I know we were very early to the party on Gen AI, so early doors, 2023. Actually, you know, Sergio, who you'll hear from later, and our team built Moody's Copilot.
(13:26):
We were pretty original with our naming. And initially our KPIs were actually around adoption, and all we wanted was: just please come and use our platform, this would be brilliant. We spoke a lot about having 14,000 innovators, and that was our entire workforce. We wanted them to actually use Gen AI in a safe way.
(13:48):
We wanted them to be comfortable with it, and that was, you know, of utmost importance. But then you get into 2024 and you're like: okay, where's the tangibility here? And I think that's where a lot of organizations either... maybe they didn't get there in 2024, but they're definitely getting there now. And so we have a very active population.
(14:09):
And the reason for that was, there was endorsement from above, from the CEO of the corporation, who did role modeling, showing how he used it. We embedded it into tools people already used, so things like Teams and Slack, and had a web interface. We continued to evolve it as well, with lots of new skills, and
(14:30):
we did a lot around education. But I think we then had to look at actual workflows. It wasn't so much individuals having to actively go to a destination to use Gen AI, but: how do you understand a workflow and have the subject matter experts working with our engineers and finding the best solutions?
(14:50):
We built out specific tools, like I think a lot of organizations have. We built a customer service assistant, specifically looking at: what are our biggest job populations in our organization, where are the biggest opportunities that we can identify and match Gen AI capabilities with, and then
(15:11):
basically building out the tools.
But I think one of the really important things here, if you're at this stage in your organization, is: how do you actually benchmark? If you don't do a benchmark first, then you kind of don't know what the win was. And I think one of the other areas as well: we had a lot around accuracy, and I think people have moved
(15:32):
through this now. But early doors, there were those expectations of a hundred percent accuracy, and we know humans: we like to think we're perfect, but we are fallible. And so that was another area that we started to benchmark: what's human performance, and so what would be acceptable for Gen AI in that context? And it
(15:55):
depends on what the task to be done is, of course.
So I'd say those were the main areas. We see, of course, that our developers are able to get a lot of benefit out of GitHub Copilot, and that's been quite phenomenal as well. We're seeing it with sales teams, we're seeing it with legal teams,
(16:16):
and I think, as you start to see some of the skills improve, particularly around data, the ability to compare documents rather than just have chunks of documents, some of these things actually open up a world of opportunity for internal efficiency. So, pretty broad. But I think the tangibility is the bit that we're really
(16:36):
doubling down on now, trying to say: did it bring value? Because I think the market itself is questioning that quite a lot.
Speaker 1 (16:42):
And actually, you went over it very rapidly, but you built your own copilot, right? We did. That's something that you said like: oh, we built a copilot.
Speaker 2 (16:52):
I didn't personally,
but we did.
Speaker 1 (16:56):
No, I mean, the naming's great. I mean, Sergio has had worse ideas than this naming, so no worries about that. But where I wanted to stop is that a lot of people have always had this kind of information available, right? You've got your CRM, you've got your ERP, you've got your data room, whatever. Truth is, you never used it because it was scarcely available. It was costly. Maybe you needed an engineer, you needed to build an API, you
(17:17):
needed to do something. It was expensive. Now that it's there, a lot of people are not using it well, because it's so easy to access that it overloads you with information. Are you encountering this problem, that the excess of transparency is also causing problems you didn't know you would have?
Speaker 2 (17:34):
I mean, maybe I'm not answering exactly your question, but I personally feel that the future of Gen AI shouldn't be about adoption, and it shouldn't be people actively having to use Gen AI. I'd like it to be passive.
(17:55):
I'd like us to predominantly be embedding it into workflows. I would like your co-workers to essentially be agents. I'd like you to have human co-workers as well, but I think that as we move into agentic workflows, you won't have to actively go and write a prompt. We can actually work out
(18:15):
how to solve specific workflows, but also unique workflows and personalized workflows, using agents. So I think, longer term, the need for the majority of individuals to actually proactively go and write a prompt and do lots of steps goes away. I mean, that's the whole evolution
(18:36):
of Gen AI that we're seeing at the moment: moving beyond that kind of assistant into autonomous workflows. So, autonomous assistants.
Speaker 1 (18:48):
Great, thank you. Greg, let's move over to you, also in the financial sector, right? The financial sector has been an early adopter of AI and quantum, probably because the kinds of problems that you're trying to tackle are very, very complex. They require large sums of investment, big teams. Can you talk about the processes that you're personally involved
(19:10):
in, and how this kind of investment in bleeding-edge technologies affects your hiring strategies? Are you able to attract more talent? Are you able to compete with NVIDIA for the best engineers in the world? Oh God, I'm not going to answer that at all.
Speaker 6 (19:24):
So I have no idea. But what I'd like to double down a little bit on is that we also created one. Because I was in the automotive area of S&P, we initially called it Autopilot, so a little bit more original. And then we vibed off of the KITT car of David Hasselhoff fame, and we even had it so that it did the woo-woo thing.
(19:44):
It was really cool. But what we decided very early on is that no one knows what this technology means. So, even though we're all talking here, we're all here because of, for the most part, generative AI (later quantum, I know), and trying to figure out: what does it mean in terms of these legacy stacks
(20:06):
that we have? So we decided very early on: okay, there is no clear-cut way that we're going to deploy this bleeding-edge technology. We need to get it into the hands of the people who have the problems. And so we don't think of it so much as having the best engineers, but instead: how do we become the people who are
(20:26):
delivering the water supply to a garden? So then, once you figure out how to irrigate generative AI to everybody: make it free, make it really accessible, make it safe so that they don't have to worry about leaking tokens out onto the Internet. Once you have that, then people begin to build. And so that is
(20:50):
more of the strategy. Instead of trying to figure out how we get super good engineers, it turns out the problems are more information security, user experience, UI. And so it's not so much that we're trying to find people who are experts in large language models; our first problem was just trying to find React developers. So that's a completely different way of thinking about
(21:14):
it.
I let the folks who are using NVIDIA chips make the LLMs better every other week, and instead we concentrate on how we get this technology into the hands of the people who have to figure out how to use it. And then one other crazy thought: if we were to shut down all large language models tomorrow, I think for the most
(21:34):
part all of our companies would be fine at this point. But if we were to take away Excel from all of us, we would collapse. So we are on this long road of trying to figure out how we use technology, put it into these old, old workflows, and how those transform over time. So I feel I'm more like someone who's just supplying water to a
(21:58):
garden, rather than something more sophisticated than that.
Speaker 1 (22:02):
It's funny that you mentioned the aspect of cybersecurity, or security: data compliance, data leaks and whatnot. That's part of the conversation when we're talking about AI. We see these major scandals, big companies leaking data: oh, this was used here, or we found out that this company has been illegally using scraped data, and whatnot.
(22:25):
What kind of strategies do you have to produce your guardrails? How do you evolve them? How do you make sure they keep up with the pace of the market?
Speaker 6 (22:35):
Oh, I mean, we started with that. So the first thing we had to do was make sure that it complied with everything we have in terms of our regulation, compliance and information security. But that's something that we're already very good at: we already have existing structures, we already have existing lawyers. We have everything we need to make sure that we're compliant and that we're secure. The other thing is, we take everything onto our own
(22:55):
infrastructure. Once you have that, then the rest of it is these other things that are new. And for the new stuff there isn't a playbook to go on, so that's why you really have to get it out into the hands of that segment of your population that is eager and innovative. So: how do you enable the innovators within a large organization?
Speaker 1 (23:16):
Sarah, let's move to you. After your experience of having worked at some of the hyperscalers, you have now created a venture studio. And finally, one of the things I like more, one of the things you're passionate about, is the open source principles of governance and how they apply to AI. Because I didn't see this coming, but a lot of corporates are
(23:39):
actually giving to open source like they never have before, right? So what has been your role up until now, and how do you see this going?
Speaker 5 (23:48):
In the land of open source plus AI, there's been some really interesting work over the last 18 months about what it means to be open source in AI. Because Meta came out first and said: we are open source AI, but that was before it was really reasoned over in any sort of way. Because when you look at a model now, it's more akin to a
(24:11):
binary output from a compiledsoftware project and so looking
at, is just licensing the binaryor, in this case, the model,
open source enough?
Because the principles and thefreedoms behind open source is
that it's reproducible, you canderive new works from it, you
(24:34):
can inspect the code and you can work on it in conjunction with the other authors. And I don't know of very many open source models that meet those requirements at the very high level. Now, the Open Source Initiative has spent the last year and a half working on, and released in October, the first
(24:55):
version of what open source AI is, and it speaks to it in terms of a spectrum. To be perfectly open, you would also have to have your data licensed as open. You'd have to have the infrastructure that builds the model licensed as open, and both of those available publicly, so that someone could reproduce what was built and could reason
(25:20):
over it, could amend it, could change it. So there's been a lot of discussion about that and the spectrum that the OSI has brought forward. There are some, I'll call them dogmatists, about open source, who are not keen on the fact that data still has to be obfuscated in a lot of cases.
(25:40):
But it's an ongoing process, and this is just the first version.
Speaker 1 (25:44):
And the other thing I wanted to comment on with you: a lot of people, you know, the talk out there is usually how you can use agents to do, like, sales goals, or some processes are too complex and we want to make more efficient workflows. Okay, but one of the underlying truths of agentic AI is that we have to protect ourselves from these agents, because they're being
(26:04):
used for scams and fraud. Every day I receive a freaking SMS from Coinbase: your account was accessed from this random country out there, call us immediately and give us your credentials, right? Passwords, please. And if they record you, they can actually make a call with your voice, because they've been training a model on the
(26:28):
podcasts that you have spoken on, and stuff like that. Can we talk a minute? Let me stop a minute or two here to talk about this, because you look very passionate about it.
Speaker 5 (26:42):
Yeah, securing agentic AI is a really big, open question right now. There are so many different directions to go with this. One of the projects that I work on now is called the Coalition for Secure AI, and while it is not exclusively focused on LLMs, it is actually hyper-focused on how we make sure that we take existing controls and extrapolate them to correct
(27:05):
usage in LLMs and other types of AI, as well as: what are the new controls that we need to develop for things like agentic AI? Because your piece of software never used to need to be able to call a lawyer and say: am I allowed to do this? But your agent might need to. So what are those controls that need to be brought into AI in a
(27:28):
meaningful way in order to make it secure?
We have four different work streams right now. One of them is looking at software supply chain security, which is actually also data supply chain security for these models. One of them is looking at preparing our defenders for the new world that has AI in it. I've heard some defenders recently say: I have to treat any
(27:50):
AI model as a hostile entity in my network, because I don't know what it's going to do. I can probabilistically say it might do this, but I still need to mitigate. We have one work stream that is looking at the security risks, the new and varied security risks of these models. This sounds terrible, but it's all stuff that needs to happen with every new piece of technology, by the way, so it's
(28:31):
not unique to AI. And the fourth work stream is: what does it really mean to be agentic, and how does that meaningfully work within our structures and corporations today?
Speaker 1 (28:44):
And coming from open source, I don't know about you, but I would feel way more secure if we didn't allow big tech to dictate the principles of where AI is going, right? They can provide the technology, they can make the progress happen, but it should be more democratic. I don't know if you have any thoughts on that.
Speaker 5 (29:03):
This is one of the reasons I am very active in the Coalition for Secure AI: because, yes, it has a lot of big tech companies helping fund it and participating, but it's doing all of this work in public. So if you want to come participate and say, wow, that work stream paper that you wrote completely misses this chunk of my worry, then please come participate, because we need more people from
(29:26):
the outside helping us reason over this and decide what the new frameworks are for how we address these concerns, and how we move forward safely with our technology.
Speaker 1 (29:42):
How do you keep up? So, basically, last week was a whirlwind of big news dropped by Microsoft, Amazon, NVIDIA. Every big player dropped something: you know, the new quantum advancements by Microsoft, the new models by Amazon. NVIDIA announced the NIMs, if I'm not mistaken.
(30:04):
The NIMs, yes, the NIMs, right. So can you talk about that, and how to keep up with it? How do you make it so that we can keep up with all of these, knowing that it will be obsolete in a week?
Speaker 3 (30:13):
Yes, so basically you only need to remember two names: NeMo and NIMs. Basically, NeMo is where you have some data and you want to train a model with that data. Let's imagine you want to train a model for finance: you have all the finance data, you do the fine-tuning, and then from NeMo you can create a very small package, called a NIM, that
(30:36):
you can then use as a piece of an agent; an agent can call that NIM and use it. So basically, you can use the NIMs that we already have, we have plenty of NIMs for different things: for text, for voice, for video, and you can also create your own NIM based on your data. So basically, that's the way we are dealing with agents, or agentic workflows.
(30:56):
We have these things called NIMs, and they're quite easy to deploy, because we have everything to deploy them almost in one click. You need to read a bit through the documentation, and almost everything is kind of free. And also, when you were speaking about open source: a lot of them are based on open source models. So, for example, we have Nemotron, which is a fine-tuned
(31:18):
version of Llama, which is an open source model, and we are also very supportive of the open source community. Before, I was working in the NVIDIA robotics team in EMEA, and we have the Jetson devices, which are small embedded devices for robotics, for robots. Almost all of them are based on open source software, and there is a
(31:41):
big community around open source, and we love that community. Of course, not everything is open source at NVIDIA, but we are trying to move little by little closer to the open source community, because we know that the AI revolution has come from open source: papers, researchers, universities. So we are trying to give back to that community.
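[Editor's note: the fine-tune-then-deploy flow described above (NeMo to produce the model, a NIM to serve it) typically ends with an agent making an HTTP call, since NIM microservices expose an OpenAI-style chat endpoint. A minimal sketch; the URL and model name below are illustrative placeholders for a hypothetical local deployment.]

```python
# Minimal sketch of calling a deployed NIM from an agent step.
# NIM microservices expose an OpenAI-style chat completions endpoint;
# the URL and model name here are illustrative placeholders.
import json
import urllib.request

NIM_URL = "http://localhost:8000/v1/chat/completions"  # hypothetical deployment

def build_chat_request(model: str, prompt: str) -> dict:
    """Build an OpenAI-style chat payload a NIM endpoint would accept."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": 256,
    }

def call_nim(payload: dict) -> str:
    """POST the payload to the NIM and return the assistant's reply text."""
    req = urllib.request.Request(
        NIM_URL,
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    return body["choices"][0]["message"]["content"]

if __name__ == "__main__":
    payload = build_chat_request("finance-nemotron", "Summarize today's risk exposure.")
    print(json.dumps(payload, indent=2))  # use call_nim(payload) once a NIM is running
```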
Speaker 1 (32:00):
If anybody on the panel wants to think about one big fuck-up they've done with agentic AI while we do a round-off... One of the things I'm most interested in, and a lot of people in the audience are like: yeah, that's fine, but how do I work with these people, how do I work with S&P, how do I work with NVIDIA and all that? So if you can say really quickly how to get in contact with you, then we move on, each one of us.
Speaker 3 (32:19):
I'm pretty public, so it's very easy, you can find me. Basically, if you have any really good idea or workflow that you want to test, let me know and I can put you in contact with anyone at NVIDIA for that. There are also some people here from NVIDIA, if you want to speak to them later.
(32:41):
So there is also the EMEARobotics woman here sitting
there.
So if you have any projectrelated to robotics, there it is
.
If you have any project relatedwith AI agents or anything, let
me know and I can move thequestion to anyone almost any of
you.
(33:01):
So use that you are a few people, so we can work directly here.
Speaker 1 (33:05):
Tony.
Speaker 3 (33:06):
Thank you.
Speaker 4 (33:09):
The question is how to get in touch with me. Yeah, exactly, and why would they? Why on earth would you call me?
Speaker 1 (33:19):
When you fuck up, you will really need that contact.
Speaker 4 (33:23):
Yeah, I'm the guy in the firm who's helping founders grow and scale businesses through venture finance, and hopefully ultimately helping them secure an exit. You can find me on the website: Anthony Waller at CMS. Tony Waller at CMS. Easy. We're open, very happy for you to get in contact.
Speaker 2 (33:45):
Helen. You can find me on LinkedIn as well, although there is a very successful TEDx speaker with the same name, and I'm not her. She looks a bit similar to me as well, so it's a bit of a problem. But other than that, we have also recently joined FINOS, and the Moody's co-pilot that we talked about
(34:07):
earlier: we're working with FINOS and making that code available for organizations who literally kind of want to leapfrog in their journey in Gen AI. So have a chat to us afterwards, but find me on LinkedIn, and Sergio and Robbie are here from Moody's as well.
Speaker 1 (34:25):
Thank you.
Speaker 6 (34:25):
Right, Greg Mount, also on LinkedIn. There's another Greg Mount; he runs a hotel chain. That's not me. I can't get you a hotel room in Prague. I'm no longer at S&P, I'm now at Kensho. Kensho is owned by S&P, so it's just an AI area within S&P.
(34:46):
I'm very interested in the fact that there are not that many people who seem to be talking about applied AI at businesses. So I hear a lot about the tech. I hear a lot about the chips. I hear a lot about different approaches. I haven't heard as many stories out there about how we are applying this at businesses, or how we are looking to help
(35:07):
businesses apply this technology. I think the change management that's going to be happening at organizations is going to be immense, and I think the way that businesses structure themselves is going to change dramatically, and we're just beginning to figure those topics out.
Speaker 1 (35:21):
Speaking about competition on names, my surname is Rodriguez. So, Alex Rodriguez, most famous baseball player of all time, according to somebody who doesn't like baseball, because outside of the US, who does? South Korea?
Speaker 6 (35:32):
What's that? South Korea? Oh yeah, and Japan. We have Japan here. Japan baseball, Tokyo Dome.
Speaker 1 (35:39):
Amazing. Someday I will be beating him on SEO, but it takes a while, right?
Speaker 5 (35:47):
Sorry. So I'll point people to LinkedIn as well. That's super easy, and I usually come up in people's searches, but you will need my email address, and GenLab is a small enough venture studio that it's just my first name (excuse me) at genlabstudio, so I can be found there. I can also be found, and nobody else mentioned this, on
(36:13):
Mastodon, which is Sarahahnavotny at mastodon.social, so I can be found there. And then, was there another? Oh, how or why might you contact me? That would be why. As a venture studio, what we actually look
(36:38):
to do is make very small investments in existing companies that can become building blocks for real problems. So we should talk, for our strategic investors in the fund. So generally, as a venture studio, we start with a problem and then we try to work to solve it, and so then we spit out companies a little bit later; we attach founders a little bit later than usual. So we're not usually funding a hoodie and a founder, or a hoodie
(36:59):
and an idea, but instead we're building minimum viable products and then saying: oh, Alex, you've been working with me on this and you've been a great advisor. Do you have an interest in maybe being CTO over there? So we bring people in along the path and then put founders on
(37:19):
later. So if people are interested in talking to me about real-world use cases, or have really interesting building blocks that might go into solutions that are focused on critical infrastructure and the intersection of AI, distributed
(37:40):
autonomous systems and cybersecurity.
Speaker 1 (37:42):
Awesome, thank you very much. Well, Sergio is going to be coming here to prepare the demo; it might take a couple of minutes. If somebody wants to share a fuck-up, if somebody has thought of one... Anthony, last year you shared one. You're not forced to do it this year again, but does somebody want to share a fuck-up or a funny story with...
Speaker 5 (38:00):
Gen AI? I was going to say, but lots that aren't AI yet.
Speaker 1 (38:04):
Okay, it might be something else, but it has to be yours. It's a long career.
Speaker 4 (38:13):
Yeah, exactly. Somebody wants to go for it? While he prepares, a couple of minutes, I can tell you about one of the worst days at work, which wasn't really my fuck-up; it was kind of my team's. This is really early on.
So when I first joined the firm, it was dot com, right? Who remembers that? That was a long time ago. And I was acting for one of the very early travel businesses that was selling to another
(38:35):
travel business, to make a travel business that you will know today. And the business I was working for, some of the shares that they were selling were, bizarrely, represented by bearer bonds. People remember those? So they were bearer shares. The shares didn't exist other than as a piece of paper.
(38:58):
If you didn't have the piece of paper, you didn't have the shares. So they lived in a bank, in a safe. Very secure, no problem. And we were completing on a Sunday because, you know, why not, that's what founders like to do. And I had to arrange for a secure van to bring the share certificates
(39:22):
to the meeting. That was my one job; I was very junior. All right, so I arranged it, got a secure van to turn up to the meeting. Everyone was supposed to be there for 11 o'clock, a nice civilized completion on a Sunday. I got the croissants and the coffee out, and everyone was very happy.
After about an hour, everyone's sort of looking at each other saying, well, where are they?
(39:42):
Where are the certificates? No, the croissants were stale by that point, but they were there. Two hours, three hours, no share certificates. So I started slightly frantically calling the security team, and they didn't know where this guy had gone in the van, and eventually I got a very sheepish call back to say he
(40:05):
won't be turning up today with the share certificates. I'm like, well, why is that? Yeah, he was held up and was robbed at gunpoint.
Speaker 1 (40:17):
I thought you had forgotten to ask.
Speaker 4 (40:21):
And they were gone. They were never found again. It was an awkward conversation.
Speaker 1 (40:31):
Quite the...
Speaker 4 (40:31):
Sunday. Honestly, someone must have known it was happening. That's the other weird thing. What was weird is they never turned up with them. You thought they might have turned up and sort of taunted us at the window, but no, they just walked off. They must have thought there was something else in the van, I don't know. But anyway, there we are. Someone else's fuck-up, but it made me feel pretty bad.
Speaker 1 (40:55):
Please give it up for the amazing panelists and for everything they've shared. Thank you very much.