Episode Transcript
Available transcripts are automatically generated. Complete accuracy is not guaranteed.
(00:01):
So, welcome to another episode of Startup to Scale Up Gameplan.
Today's special guest is Jan Leschel. Jan's a serial entrepreneur,
a visionary leader, and currently the founding CEO of Probable,
a startup revolutionizing decision-making through artificial intelligence.
(00:21):
So, Jan, welcome to the Startup to Scale Up Gameplan. Hello, Gary.
Thanks for having me. Just to kick things off, Jan, can you share with us some
of the pivotal moments in your career that led you to founding Probable?
It goes way back. I started coding when I was 10 years old, and I've never left the world of software.
(00:42):
And so I would say that is the first foundational moment in my career when I
was acquainted with computers that had barely one kilobyte of memory.
So at the time, the prompt, you know, people talk about the prompt in modern gen AI with ChatGPT.
Well, the prompt at the time was a literal prompt. It was facing you, glaring.
(01:04):
With, you know, no intent to do anything except wait for you to program the machine.
So that became my hobby and became my career in many ways. So that was the first foundational moment.
There are many, but another one was in 1992 for my final year thesis at the university.
(01:27):
And so I decided to investigate deep learning and the world of AI at a time
where deep learning was not at all popular, probably set to fail.
Many people disregarded it and preferred symbolic AI instead of connectionism and deep learning.
And so I dabbled in deep learning and wrote my own code for forward propagation,
(01:51):
backpropagation and, you know, fast Fourier transforms to analyze speech. And so that didn't work quite well, but anyway, the theory, the foundation, was there. But it was there before, in the 70s or even earlier, and it became something actually quite impressive only recently, in the last 10 years.
(02:18):
So second foundational moment. And then there are other moments,
especially in the last two decades, where I've been an entrepreneur in tech in Paris,
where I've come to realize that it is extremely difficult to compete in a world
where essentially the copy has been dictated by Silicon Valley and Silicon Valley
(02:42):
has created a sort of gigantic value capture system that works really well.
And so we have big tech, and then there's the rest of us.
And so my new gig, my new mission is a result of that, realizing that we have
to compete differently.
(03:03):
Tell me about the core inspiration behind Probable itself, and how do you see
your business transforming decision-making processes in different industries?
So Probable is a project that came to me and I have no merit for some of the
key assets that are involved.
(03:24):
Part of the venture: this company is built as a result of a mission given to us by the French government, essentially addressing a topic that is already covered, which is machine learning through open source. So there's a key technology known as scikit-learn, a scientific
(03:46):
toolkit for machine learning. This technology has been downloaded 1.5 billion times over the past decade, 68 million times a month: 22 US, 25 China, 5 UK, 4 Germany, 3 France. But this technology is foundational, and every data scientist on the planet uses it. So the French government realized
(04:08):
that the key people were working in a research lab and that this technology was critical to a lot of machine learning out there, although, and I'll say this with humility and realism, this is boring AI. We're talking about AI for math, as opposed to AI for literature, which is what Gen AI is all about, or LLMs at least.
(04:31):
Anyways, so this technology is foundational, and the goal is to actually build around it.
And so we have a mission that is now my mission, which is to build a suite of
open source technologies around scikit-learn and to turn it into technology that can be adopted by all of humanity.
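For readers who have never touched it, the kind of workflow scikit-learn standardized looks roughly like the sketch below. The toy dataset and the particular estimator are arbitrary choices for illustration, not anything specific to Probable's plans.

```python
# A minimal, illustrative scikit-learn workflow: load a toy tabular dataset,
# fit a model on a training split, and score it on held-out data.
# Dataset and estimator are arbitrary choices for this sketch.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X_train, y_train)

print("held-out accuracy:", accuracy_score(y_test, model.predict(X_test)))
```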
(04:52):
So it is some sort of dual play where we are a for-profit company.
So we have to build solutions and services that we charge for to sustain the
core mission, which is to also distribute open source technologies to enable
the world to take advantage of data.
(05:13):
Which is exactly the question that you asked. In other words,
how do we enable value creation from data?
Well, that is precisely by distributing open source technologies.
But also commercial solutions that go with it.
So it's a combination of both. And this world operates as much with open source,
if not more, as it does with proprietary technologies.
(05:38):
So if you think Linux, Linux is nowadays probably powering most of the computers
in data centers worldwide.
Microsoft, of course, is quite prevalent still. So Windows is used also in data
centers, but it's used mostly for laptops.
And there's also the Apple Mac OS for laptops.
(06:00):
But in data centers, Linux is powering the majority of it, if not all of it,
that's for sure. So open source is part of the equation, and the two cannot
exist without the other.
Companies build proprietary software on top of open source, and open source
is a reaction to too much concentration.
So that is actually the beauty of it. And some people actually are radical,
(06:24):
either radical open source or radical proprietary.
And I believe in a world where the two actually coexist. Coexist peacefully is not quite the term; they coexist because they are symbiotically connected.
Are there specific challenges in commercializing your product because of the open source element? Walk me through that: how are you able
(06:47):
to both offer an open source approach and make serious money? It is a challenge, but some companies have succeeded rather well; one key example is typically Red Hat.
So Red Hat is a source of inspiration for us because it has kept producing code
(07:08):
towards the community by reinforcing the Linux kernel.
So typically Linux remains accessible and free to use by almost anyone.
However, they also built an operating system called Red Hat Enterprise Linux for the enterprise market, and they built extra features and extra services around it. So they did both.
(07:33):
They contributed to the common good while building solutions that are what companies need.
So that is one source of inspiration. There are many other great companies that have also built different sorts of service layers or software layers: typically MongoDB, Elasticsearch, Redis, HashiCorp.
(07:56):
So there are many variations, and the difficulty is managing expectations.
Because the product, when it's
open source, initially doesn't have a community, so there are no stakes. You know, claiming that your project is open source means absolutely nothing.
If you're alone with your open source, nobody cares. However,
(08:18):
as soon as you have opened up and you've managed to attract the interest of
a community, then the community has expectations.
So the question is whether or not you keep the promise of the openness of the source,
and the openness and the freeness of the model, which is a license associated
(08:39):
to it, or whether you do a radical flip at some point.
And so some companies have flipped the license,
the terms, therefore deceiving an entire community, while others are building
proprietary software on top of a source, not deceiving anyone,
but charging a lot of money for it.
So in reality, there's a whole spectrum between the two.
(09:04):
And what I've come to realize recently by spending a lot of time with open source
communities and talking to leaders building open source companies is that there
is no one-size-fits-all model. Every single open source company is different based on the nature of
(09:25):
the open source project, the size of the community,
the segment that it represents, the personas that use it, whether it's an end-user product or middleware, whether it's B2B or B2C, and so on and so forth. There is an infinity of variations for open source companies to thrive.
And I find that quite refreshing in some ways, because commercial companies
(09:48):
are usually always the same.
There's only a handful of business models that are quite boring.
And in fact, I would say that even the most acclaimed business model for software
today, software as a service, even that has reached limits.
Buyers are fed up with layers and layers of software as a service.
(10:09):
So I think we've reached perhaps even a plateau in terms of the expectations
for software as a service.
Open source, no. No plateau. It's an infinite number of possibilities.
Okay. So with all these possibilities, what are the challenges that keep you up at night, and how are you addressing those at Probable? So I refer to this notion
(10:34):
of expectations, right?
To me, that is the one thing that is critical here.
In fact, in an open source project, managing expectations is more crucial than
managing expectations,
say, for a traditional startup that has shareholders and a board where every
(10:58):
once in a while, every month or every quarter, you meet and you reset expectations, or not.
With open source, actually, every commit, every change in the code base reshuffles
expectations, because there's a lot of transparency.
So that is the one thing that keeps me up at night, because here in this particular
(11:21):
case, again, as I said, I have no merit for scikit-learn, which is a, you know, sort of patrimonial piece of IT that belongs to the world already, because the license is BSD-3.
So there are millions of data scientists that wake up in the morning,
have their coffee, and install scikit-learn.
(11:41):
So that I have to protect, and I do have to manage expectations.
In other words, building the company around scikit-learn is no small feat.
Of course, I have shareholders, I have co-founders.
I do have to manage their expectations as well. But the greater good and the
greater community has higher expectations from us.
(12:01):
So to me, that is the one thing that is critical. And if I do good by them,
then we'll do good. We'll do okay.
So what are the key metrics you're using to measure the success of Probable's solutions?
And can you maybe share an instance where these metrics led to an unexpected insight or even a pivot?
(12:25):
So it's a bit too early to talk about that because the company is recent.
So the structure itself, Probable, was created less than a year ago,
but we only started working as a group in February this year.
So we spent quite a bit of time on the discovery process, talking to prospective clients, talking to future partners to find
(12:48):
out how data science is done in 2024, or rather what's not working in 2024.
Because companies have a lot of pressure from their own investors managing expectations.
They need to optimize processes to decrease cost, to improve margins.
And so data science is there to help; data science optimizes all of that.
(13:11):
So they have expectations from technology, from tools, and we are a provider of such technology.
So KPIs are one way to look at it. But when you're a startup building a new product, you operate using OKRs, objectives and key results. OKRs are better than KPIs, because KPIs
(13:32):
you use when things are rather stable, right? So let's say you have a factory and you're churning goods out of the factory; the KPIs are used to keep a quality level, so you shall not go under a certain threshold.
KPIs are good for that when you know exactly what you're doing.
(13:53):
So some of the KPIs I'm looking at for the open source community are:
are we maintaining a good quality?
Are we getting more inquiries and more issues?
Or is this fairly stable? Are we reducing the amount of issues?
So you can look at KPIs for existing projects because
the trend is something that is basically baked
(14:15):
into the expectations. So KPIs are for things that exist. Now, building a new product, you're looking for objectives, which is to address a first pain point, and key results: do you have people who actually want to use the product and want to pay for it? That's the first step. And the second step, once you've got a few users: are they using it daily? So the objectives are adoption and usage. And then finally,
(14:41):
you look at the usual sales pipeline and so on and so forth.
So I would say that we are both a company with existing metrics to look out for.
This is to answer our mission with regards to the open source technologies that
we maintain and develop.
And then any new product, any new
(15:02):
value proposition follows a different framework, which is more OKR based.
But can you provide any examples of Probable's solutions being applied in the
real world and the impact they're having?
So this product is being built as we speak. We started coding a few weeks ago.
(15:22):
So while scikit-learn is adopted, as I said, by pretty much everyone,
the question we ask is, how can we help our persona, in other words,
the data scientist, improve their ability to do their work?
And data scientists in many ways still function in a very artisanal way.
(15:43):
And I do not say that in a derogatory way, but they are hired to look at a problem
that is usually unexplored in the company.
So they do what they can. Sometimes they're given an Excel spreadsheet or even a text file to look at.
And the question that is being asked to them is to make sense of this data stream
(16:05):
to actually look for patterns,
look for ways to extract a model because our world is more anchored in tabular data.
So contrary to, say, Gen AI that deals with text data, in our case,
we look at tabular data, often quantitative data.
(16:26):
So the question is, can we run through the data and come up with a prediction engine?
Can we look at a log of financial transactions, which has been marked as being either fraudulent or legitimate,
in which case we can possibly derive a fraud detection engine out of it?
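As a rough illustration of the fraud-detection example just described, here is a hedged sketch using scikit-learn on a tiny, made-up transaction table. The column names, values, and choice of classifier are assumptions for the sake of the example, not a real dataset or anything Probable ships.

```python
# A hedged sketch: a table of transactions, each labeled fraudulent (1) or
# legitimate (0), used to fit a classifier that scores new transactions.
# Columns, values, and the chosen model are illustrative assumptions.
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier

transactions = pd.DataFrame({
    "amount":        [12.5, 980.0, 43.2, 2500.0, 7.9, 1300.0],
    "hour_of_day":   [14, 3, 10, 2, 18, 4],
    "merchant_risk": [0.1, 0.8, 0.2, 0.9, 0.1, 0.7],
    "is_fraud":      [0, 1, 0, 1, 0, 1],  # labels provided by investigators
})

X = transactions.drop(columns="is_fraud")
y = transactions["is_fraud"]

clf = GradientBoostingClassifier(random_state=0)
clf.fit(X, y)

# Score a new, unseen transaction.
new_tx = pd.DataFrame({"amount": [1800.0], "hour_of_day": [3], "merchant_risk": [0.85]})
print("estimated fraud probability:", clf.predict_proba(new_tx)[0, 1])
```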
(16:48):
And so once data scientists do that, the question is, how does this go into production?
How do we compare models? How do we validate the model with management, and so on and so forth.
So we're looking at this specific problem, right? They use scikit-learn. Scikit-learn works. It just works, right? So it does the work for them to come
(17:09):
up with a model from data sets.
However, the one thing that is not so trivial is to actually interact with management,
interact with peers, collaborate around models, archive, create traceability, create introspection.
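As a sketch of the comparison-and-traceability gap being described, the snippet below cross-validates two candidate scikit-learn models and writes the scores, parameters, and library version to a JSON record. The record format and file name are invented for this illustration and are not Probable's tooling.

```python
# Hedged sketch of "compare models and keep a trace": cross-validate a few
# candidate estimators and persist scores plus metadata for later review.
import datetime
import json

import sklearn
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import HistGradientBoostingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

X, y = load_breast_cancer(return_X_y=True)
candidates = {
    "logistic_regression": LogisticRegression(max_iter=5000),
    "hist_gradient_boosting": HistGradientBoostingClassifier(random_state=0),
}

records = []
for name, model in candidates.items():
    scores = cross_val_score(model, X, y, cv=5, scoring="roc_auc")
    records.append({
        "model": name,
        "params": model.get_params(),
        "cv_roc_auc_mean": float(scores.mean()),
        "cv_roc_auc_std": float(scores.std()),
        "sklearn_version": sklearn.__version__,
        "timestamp": datetime.datetime.now().isoformat(),
    })

# A flat JSON file stands in for whatever archive/traceability layer a team uses.
with open("model_comparison.json", "w") as f:
    json.dump(records, f, indent=2, default=str)
```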
How do we create the conditions for the entire pipeline that is both a mixture
(17:32):
of humans and data sets and models, how do we turn that into business productivity?
We're not doing MLOps, because the MLOps field is quite busy already. There are many solutions, and we are 10, or even 15, years after the explosion of big data.
So, we're looking at this particular moment in the entire pipeline,
(17:53):
which we call pre-ML ops, where we seek to augment the data scientist who is using scikit-learn. So it's the extension of scikit-learn: we create a suite of solutions that actually help them perform better at their job, and we also want to have their back, because,
(18:15):
the more we go forward, and Europe is certainly very much concerned about that (some people will say that Europe is too concerned about regulation), Europe is concerned that AI models, machine learning models, create black boxes that are detrimental to citizens. And so there is a huge liability component
(18:37):
that is being built into the regulation, so that for companies using machine learning on data to build models that will infer certain things, therefore possibly discriminating against humans down the line, we can ask: can we go back to the
(18:58):
model? Can we go back to the person?
Can we go back to the assumptions that came up with this model?
And so we're building a lot of stuff around that. So it's a bit too early to
disclose, but that's the gist of it.
I'm increasingly interested in the overlap between AI and emotional intelligence.
(19:19):
Do you see a future where AI can effectively replicate or at least augment human
emotional intelligence in decision making?
It's a difficult question.
I'm a bit pessimistic because if you look at the world we live in,
if you take the subway, the tube, the bus, you'll see pretty much everyone staring
(19:44):
at their screen. But they're not staring at their screen reading; they're flicking through content that is not useless, but content that does not teach them anything.
So the notion of human augmentation comes with a cost, which is to build ignorance at the same time.
(20:06):
That is the danger. It's a slippery slope. And conversely, we could think that AI, especially a new type of AI, so not the one I'm focusing on, but the one that is buzzing at the moment, Gen AI, tends to be built on top of LLMs, large language models,
(20:28):
or very large language models that have been built by capturing knowledge,
taking all of humanity's knowledge that has been written or even spoken,
because we can also now transcribe.
And so that knowledge is being compacted into gigabytes,
(20:50):
but that black box, and that is literally a black box here, captures knowledge
and renders us lazy and potentially ignorant.
And so that is even more dangerous because if we do not protect the young generation
from that, how do they flex their muscles?
(21:13):
How do they build the number one muscle that we need, which is to be curious, to learn, and to build on top of knowledge day after day in that critical period between
three years old and 20 years old?
(21:33):
That's why I don't know how to answer your question, and I'm afraid that augmenting
humanity will come with some sort of attrition,
some sort of dysfunctional aspect, or at least breaking from the sort of animal
that we've been for certainly centuries, if not millennia.
(21:54):
So I'm worried about that.
At the same time, technology is what we are also.
We're a species that actually is able to build technology.
So that's part of our programming as well. And that's where maybe we go further
past this nostalgic age where we were a learning species,
(22:19):
where we were even, you know, who is to say that writing needs to remain?
At some point, we were a species that did not write.
What if AI turns us into a species that only experiences with the addition of
technology and no longer writes because it's oral tradition again,
(22:42):
all over again, and we're all about experiences? I don't know.
So we can talk about singularity. We can talk about that end point where we
actually augment humanity.
In other words, become something else. You can actually see a future where humans
have no need for written communication and maybe no need for arithmetic either,
(23:07):
because all of these things can be done by AI tools, exclusively by AI tools.
Yes, but at the same time, it's a different species. Or it's a species that actually regresses,
in a way, so it could be positive or negative, regresses compared to the current trajectory, right?
(23:28):
In terms of... so, you know, we talk about the third world. The third world typically has less access to technology. And we are not so humble, saying that the first world is more advanced.
Yes, we are more advanced technologically.
But, you know, what is the best possible life or humanity for
(23:50):
a single human? Who is to say that it is to actually compute arithmetic, to learn how to read? Okay, that is beautiful. Actually, making music is beautiful. What if we all end up being musicians and comedians? So I don't know, I have no idea, because that's science fiction. Or there's
(24:14):
a scenario where in fact we are fed up. That could happen sooner than later, right? If we are swamped by content that is auto-generated, the value of content will drop to zero, in which case we won't be bothered, and then we'll go back to, you know, performance arts and spending long lunches as we do in Paris. That's a cliché, by the way. But, you know, yeah.
(24:39):
I'm trying to work out whether the future you're articulating is utopian or dystopian,
but it's certainly an intriguing one.
Many entrepreneurs I speak to seem to have picked up major, major learnings from their failures.
(24:59):
So can you share a significant failure you've experienced on your journey,
either at Probable or perhaps, more likely, prior to that, and how that has shaped your approach and your leadership style?
So I hinted at that earlier. I've been an entrepreneur in Paris for the past
25 years. I've created a number of companies.
(25:23):
I've sold them all. And this means that I failed every single time.
I failed to create a big tech. I failed to create a magnificent seven.
Because these are economic engines of growth.
Currently, they're located in the US. China is building that as well.
And Europe has not created that. France has not created that.
(25:46):
I have not created that. So that is my frustration for Europe. In other words, how do we create that in technology, because that is my field?
I think France has done okay in luxury and fashion.
But in terms of technology, because I believe in a future where technology is actually helpful,
caveat, not in a dystopian way,
(26:08):
but in a way where we control technology, where, you know, we are eyes wide open and, as I say, AI is wide open, in other words referring to open source. So my frustration has been that, in order for France, for Europe, in other words not the US, not China, how do
(26:30):
we create a level playing field? How do we participate equally in deciding how technology shapes humanity down the line? Currently, Europe does it through regulation. That's the one thing we export, and it doesn't bring money back to pay for infrastructure and schools and, you know, social welfare. So my frustration is precisely here. One of them is a company called Snips. Snips was a voice technology
precisely so one of them is a company called snips snips was a voice technology
company, privacy by design, working equally well as Siri and Alexa and Google Voice Assistant, although it was confined to a domain, a linguistic domain.
(27:18):
We ended up having to sell it to Sonos, which is a great company,
the speakers company that does smart speakers, actually very smart speakers,
the real first smart speakers.
And now every Sonos device has the Snips technology built in.
So you'd say that's a success, but I'd call it a failure because we had to sell.
(27:41):
Deep tech was not well supported at the time in France, so we struggled to raise
additional rounds of funding,
which would have been required for us to build up a company that would be actually
relevant, very relevant in 2022, '23, '24, with the advent of Gen AI.
So that company we had to sell, we had to wrap it up into Sonos.
(28:03):
So that's a success for Sonos, certainly it's a success.
I'd even say that they got a good deal out of it.
It was, you know, $40 million plus; this company was worth well more. So how has that experience influenced the way you're thinking and behaving now?
(28:25):
So, you know, going full circle, I've come to realize that France and Europe... So if you look at Paris, Paris is actually doing okay. Twenty years ago, when I started off as an entrepreneur, I was alone and there was no significant support around.
It was basically a fetus of an ecosystem.
(28:47):
We didn't have structured venture capital, although we had some bankers who
started to create funds, but it was not the same as it is today.
Nowadays, we have everything.
We have incubators, accelerators, and seed, Series A, B, and C investors.
We even have the beginning of an exit market. Not great, but...
So, the ecosystem in Paris is well structured.
(29:09):
The one in London is very advanced, and the one in Berlin is on par with Paris, slightly behind.
But all three are way behind what you have in Silicon Valley,
or even in New York, or even Tel Aviv.
Israel is the startup nation. It's a small country, but in terms of tech,
(29:33):
they've actually brilliantly implemented the Silicon Valley copy of venture
capital, raising funds and then exiting.
Actual billions, right? It's not paper unicorns, it's actual transactions in the billions.
So France, Berlin, London, Stockholm, collectively doing okay,
(29:56):
but still far behind the US.
So it might take a few more generations for us to generate the sort of self-fulfilling
prophecy that venture capital provides.
So that's happening, but it's slow and it takes generations.
Meanwhile, I have been on the entrepreneurial side.
(30:18):
So my journey is one where I take all of the risks and I'm now impatient to have impact, right?
I'm looking for impact at scale to the benefit of various layers of communities,
starting with the one here in France, then of course Europe,
(30:39):
then the rest of the world.
In other words, how do we create a level playing field?
And that should not be concentrated with a couple of magnificent seven companies.
They are magnificent for sure, they're otherworldly, but they distort the market.
And even the most liberal among us need to recognize that there is a sort of
(31:06):
oligopoly that is detrimental to free markets.
Free markets should reject actual monopolies or oligopolies or anything that looks like this.
So I have come to realize that the way to have impact is actually through open source.
Because open source is quite orthogonal to centralized value capture and in
(31:32):
a way it is no different from what we've seen over the past 70 years, seven decades where technology has been ping-ponging between centralized and decentralized: server-side and client-side, thin client and cloud computing and edge computing, centralized web and Web3, blockchain,
(31:57):
and open source is exactly the response to proprietary.
And so everything exists with its antithesis, at the same time or in sequence.
And so I want to do my part to play the rebalancing act because I find that
(32:21):
things are extremely unbalanced right now.
Now, you've touched on quite a diverse range of technologies and tech trends, shall we say. When it comes specifically to AI, what are some of the darker aspects of AI that you think the industry needs to address more openly,
(32:44):
and how is Probable contributing, or potentially contributing, to that conversation?
So there's a phrase from Arthur C. Clarke: you know, any sufficiently advanced technology is indistinguishable from magic.
That is a beautiful phrase. I love it. But it is dangerous.
(33:08):
If you look at it and you replace magic with AI, because AI is quite magical
in some ways, then it is dangerous, because magic is deceitful, right? It's all about tricks.
It's all about deceiving humans. And so I'm worried that AI is deceiving by
(33:30):
either stealing content from people who've actually built their own business,
building content, right?
So if you look at, you know, journalists and newspapers, they've built content.
It is theirs. There's a distribution model.
And, you know, without any warning, some companies just go and scrape everything and say, oh, it was
(33:53):
available on the web so I scraped it, it should be okay, right? So they've captured everything; they've actually deceived all of the rights owners because they went too fast. That's number one. Number two, deceiving humans: because if you look at the interface, the man-machine interface, more and more we'll be conversing with bots.
(34:15):
And there's a notion in technology that is called anthropomorphism,
which is when the machine pretends to be human-like, man-like.
And I think anthropomorphism is dangerous.
It is deceiving because humans will be easily manipulated that way.
(34:37):
And so I prefer a robotic metallic voice to a warm fake or deepfake voice or impersonation.
That is deceiving. So I think all of these things add up. And AI is both the
danger of capturing too much knowledge and then turning us toward ignorance.
(34:59):
That's one danger. And the other aspect, from the human-machine interaction, is this notion of deception.
So these are the things to look out for. And I think if every company keeps its eyes wide open with that very concern, and expects AI to be open, so these two concepts that I keep going back
(35:23):
to, which are awareness and control, then we'll be okay.
But that needs to be done at the very highest level.
CEOs of companies need to understand that AI is actually just another way to do software.
It's part of the toolbox for software. You build software using AI or traditional code or both.
(35:52):
And so there's a limit in terms of what tools you should be using.
Unless you actually tell your constituents, your employees, by the way,
this is a company that is not meant to create jobs.
This is a company that is meant to create value, capture it.
And the only stakeholder is the shareholder.
(36:17):
In other words, in 2024 with AI, we could ask the rhetorical question,
is it sufficient nowadays to have financial dividends alone?
And I leave the question open because there's no simple answer.
But if you do not start with that question that actually removes the human out
(36:40):
of the equation, because financial dividends are just that, then, of course, the next question is: what about societal dividends?
How do we re-inject humanity into a world where technology with AI is likely
to win it all, take it all, capture it all, remove it all from us?
(37:03):
So I think that is an important question for all businesses out there.
If you could solve one major global issue with AI, what would it be?
And how would you approach that?
So AI typically can assist us in accelerating discovery for certain large-scale
(37:26):
problems because we are overpopulated.
So that is something we have to deal with.
The planet is the planet. So by the way, the planet does not wait on us.
The planet does not care so much about us.
The planet will survive us. When people say we have to save the planet: no, we have to save humanity from itself.
(37:48):
So anything that touches plagues, anything that touches global warming,
predicting major cataclysmic events ahead of time, AI can probably help.
Finding new molecules, AI can help because we are a Darwinian species,
(38:10):
we evolve only so slowly, and our ability to process information is itself limited.
Even if we collectively assemble the brightest minds, we cannot process all
of the chaotic black swan events that are coming to us.
So every flap of a butterfly can be processed by AI.
(38:34):
We cannot process it as humans. And so I think AI should be focused,
or at least a certain category of algorithmic models, needs to be looking at
ways to help humanity solve problems that it cannot cope with on its own.
(38:57):
So we're looking at AI as a productivity tool.
That's fine, but I think there are limits in terms of acceptance.
You're familiar with Schumpeter, I suppose. Schumpeter, who was the advocate
of creative destruction, right?
So every new technology killed a number of jobs, but only to recreate new jobs.
(39:19):
That was true for the past century.
Is it still true now? I fear it is not true anymore,
because Schumpeter never lived to see the world of exponentials. So, therefore, if jobs are destroyed
(39:39):
too fast and new jobs are not created fast enough, then the equation doesn't hold anymore, right? This will just be disruption, not creative destruction. And so technology has this built-in exponential nature: Moore's law, big data, AI.
(40:00):
So we should be looking at hard problems to solve; that is where AI can shine. Compute power, of course, is behind it, and storage; all these things are extremely powerful and perhaps should be focused on the very hardest problems that we face.
So, you mentioned
(40:22):
earlier your failures, in quotation marks, your failures in terms of the businesses you've led and the exits you've delivered.
So looking ahead, what's your ultimate vision for Probable, for success?
And also, how do you hope that you and your team will change the world or at
(40:46):
least have a major impact on the world?
Again, I have no merit, but scikit-learn has changed the world already, once and every day.
So scikit-learn is this technology that is foundational.
In other words, there is a before scikit-learn and there's an after.
(41:07):
And scikit-learn created a way of thinking that has shaped millions of data scientists and many more generations to come.
Our goal with Probable is to wrap a layer of additional value that we transfer
to the market and to data scientists.
(41:29):
And our ultimate goal is to create sustainability for the model itself.
So we're not looking for a quick flip.
This company is not for sale, right? So unless there is a magical alignment
with a company that actually allows us to perpetuate the mission,
(41:51):
which, by the way, the mission that I talked about earlier is baked into our bylaws.
So we are actually quite deliberate in the way we're doing things.
And so this company needs to exist forever.
This company could IPO.
(42:12):
That would be a good way for people to actually buy shares and be shareholders of this company. So this is a company that aims to deliver on the mission forever.
Okay, that's fascinating. In fact, it's been an incredibly thought-provoking and at times quite philosophical discussion on the future
(42:37):
of technology and the future of humanity.
I've really enjoyed getting to know a bit more about you and your company and your technology.
And I think I'll check back in with you in another 12, 18 months to see how
you're progressing on your journey.
Until next time, keep exploring the frontiers of what's possible.
(43:00):
Will do. Thank you, Gary.