March 20, 2024 • 35 mins
Hercules builds high-quality, enterprise-grade AI solutions based on data extraction, transformation, and verification, while integrating with enterprise systems and tools for maintenance and monitoring. Join NFX general partner James Currier and Hercules AI CEO Alex Babin as they talk about the latest breakthroughs in enterprise AI.
Episode Transcript

Available transcripts are automatically generated. Complete accuracy is not guaranteed.
(00:03):
So welcome to the NFX podcast, and we'd like to welcome Alex Babin, the CEO of Hercules AI, on today.
Alex, thanks for joining us.
Thank you for having me, James.
It's a pleasure.
Yeah.
Well, you and I have known each other now for 7 years.
We, we gave you a pre-seed check way back in 2017.
You've been working on Hercules since 2014; you're well ahead of the current boom in AI.

(00:30):
And so we thought it'd be great for our listeners to hear your thoughts about where AI is today, and learn more about Hercules AI, and why you're doing what you're doing.
So, Alex, just quickly give us some background on you.
I'm Alex Babin, CEO and co-founder of Hercules AI.
I was born in Russia and moved to the Bay Area about 12 years ago with my first startup, which

(00:52):
was also in the AI space, but interactive video, object and image recognition.
My co-founder and I, we live in Campbell and San Jose, here in the area.
My co-founder, Givork, is also into AI, but before joining Zero, and then Hercules, he'd been building skin cancer image detection

(01:17):
algorithms.
He's probably saved more lives with his work than I've ever met people in my life.
And then we joined forces to build Hercules AI to actually save time, so people can spend it on the important stuff instead of repetitive tasks.
Got it.
And how many, how many people are on the team right now?
So right now, about ninety people, spread across multiple countries. Our headquarters is here in

(01:43):
Campbell, Silicon Valley.
We have an office in Canada, in New York, and a development office in Armenia.
Fantastic.
So I guess, you know, we met years and years ago, and it's recently that you've changed the name of the company.
Could you quickly tell us sort of the basics about what you do, and then why you changed the

(02:04):
name to Hercules.
Originally, the company was called Zero Systems.
Which is quite a good name for a technology company, and it stood for zero time wasted for knowledge workers.
But recently, not just our product offering expanded, but also the tasks we build AI products for expanded dramatically.

(02:26):
And some of those tasks are so hard to automate that, at some point, we realized that the name doesn't reflect what we're doing.
Herculean efforts, cleaning the Augean stables, and things like this: that's what AI is doing now.
And the reflection of the name change was basically because we're doing so many things

(02:47):
that were before considered impossible; now it has a broader meaning, that Hercules can take care of things.
Well, and, you know, you and I were actually at a whiteboard, I think, when the name change came up, about a year and a half ago.
Is that right?
Yes.
And it was your idea.
You looked at our product and said, wait a second, guys.

(03:07):
Your platform is called Hercules, Hercules AI, and you are not putting it in front of your customers, the name, which is a great name to describe what you're doing.
Why are you hiding the great name behind the scenes? And we flipped it over.
So now Hercules AI is both the name of the company and brand, and the product itself.

(03:28):
And I would tell you, James, it paid off nicely, because people love it.
Absolutely.
Everyone loves Hercules.
Everyone is neutral about Zero.
Because Zero is neutral.
Right?
Well, that's great.
I'm glad it's paying off.
It makes life more fun to have a great name as well.
So let's turn it back to 2014.
Well, before the latest AI moment that we're having now.

(03:49):
So you were working on the idea of an AI model that could summarize long-form texts into concise, easily consumable, like, snippets, right?
Like saving people hours and boosting productivity.
Where did you get that key insight?
And, you know, who were your first customers?
Yeah.
So, actually, we were the classic approach of a technology in search of a problem, because

(04:13):
the problem itself was very challenging.
And, again, it was pre-LLM, pre-transformer architecture, pre-everything.
It was when AI was still called ML and NLP, not sexy, right?
Not like right now. But we encountered the problem when we realized that, like, the consumption of text information consumes most of the time

(04:36):
for the end users, when they are processing information.
And it doesn't matter what industry it is.
You might be a lawyer, you might be a financial analyst, you might be a reporter; it doesn't matter. Consuming text, understanding it, summarizing it, and extracting valuable insight from it is an essential part of every single workflow for knowledge workers.

(04:56):
And we decided to attack this problem.
And of course, it was pre-LLM.
We started building our RNN models, but to make things even more challenging, we decided to build all the models to work on the edge.
Like, it was not enough complexity to build a model that would work for a general type

(05:17):
of text.
You were trying to build it on the edge, meaning so that it would run on my smartphone
Right.
Without having to go back to a server, for security reasons.
Actually, no.
Security came later.
It was a byproduct.
You know why we did it?
We didn't have money to run it in the cloud.
We were cheap.
We said, wait a second.
Smartphones are powerful enough to process those small models; let's use the devices of our

(05:43):
end users to actually do the processing.
And then later it came back to us: wait a second, but it's also secure. And it allowed us to enter regulated industries, where you cannot send the data outside of the security perimeter.
And that was probably one of the most important things that happened accidentally.
I would love to claim it, saying, well, we were smart enough to predict that in the future,

(06:07):
you'll need to run LLMs on edge devices, or inside a security perimeter, but unfortunately, no.
We were just trying to save money and build something to run on the infrastructure of our clients.
Got it.
And so then you ended up with your first customers being in these regulated industries, where security was really important.
And then you got the system to be more and more secure, more and more robust.

(06:30):
Is that right?
Correct.
And once we made the first model work, and it was still pre-LLM, pre-transformers, it was incredibly hard to do that.
But we managed to do it and it worked, and we started looking for a beachhead market, because a beachhead market in enterprise is critically important.
I still do believe that it's impossible to build something generally available and

(06:52):
applicable everywhere.
At least not at the very beginning.
You have to focus, and you have to find this beachhead market that you can tap into.
For us, it became legal: large law firms.
And that's where you and I met, James.
And I still remember pitching you, telling you, hey, we're building this automation product for the legal vertical, for large law firms.

(07:13):
And you told me it was brave and stupid enough at the same time, because, actually, it's one of the hardest markets to sell into.
But at that moment, we already had a couple of clients, and they loved the product.
And it still was incredibly hard to sell, but it also had some benefits.
And then later, of course, it expanded to much broader markets, but still inside regulated

(07:34):
industries: insurance, financial services, and so on and so forth.
Yeah.
You know, legal tech, the legal customers, have been really hard to sell into for the last 20, 25 years.
They've been very slow to adopt new technologies.
That has changed in the last 3 or 4 years.
And I think, you know, we've got other companies in the NFX Guild, you know,

(07:54):
like Darrow and EvenUp and whatnot, that have been doing great with legal tech.
And you were a beneficiary of that, but you very quickly got out of just selling to legal folks, going to consulting companies and that kind of thing.
So, what would you tell yourself back in 2014 or '17, when you were cranking away building

(08:19):
your own ML models and trying to get this sort of, basically, small version of ChatGPT to work.
What would you tell yourself today?
I would say one of the most important things that we realized, later than I wish we had, is quality.
Quality of enterprise products.
It's not just related to legal.

(08:40):
Of course, legal, large law firms, are very demanding, but every enterprise product company is very demanding.
And the approach of move fast and break things doesn't work well.
So we entered the market really fast with a half-baked product and started getting clients.
And I think we were absolutely lucky not to lose any of those clients, because we

(09:04):
substituted for the lack of quality of the first product.
It was like an MVP, of course.
We substituted it with an absolute obsession with customer engagement quality.
And that allowed us to not lose any clients. We still have all the clients since 2017, 2018; we still have them with us, and now they're buying more products, and they

(09:26):
say that we are the best vendor they ever worked with.
So I would say, if I had to tell myself back then what to do and what not to do, I would say slow down and spend more time on making the product better before trying to scale it.
Because we've always been under the impression that Silicon Valley startups, no matter what you do,

(09:50):
should scale incredibly fast.
But that comes back and bites you in the back later.
And we only managed to survive that kind of tornado, with product quality that was not yet there, by being obsessed with clients.
And they gave us everything because of that.
Now, when the products are on the next level, it's much easier to look back and say, well,

(10:15):
we should have paid more attention to the engineering debt, and not spent too much time building new features until we polished the existing ones.
But, hindsight is 2020; that's basically what I would say to myself.
Got it.
Got it.
So let's talk about where you've evolved to today with Hercules AI.
So now you're a platform that allows, you know, the assembly of, what, lots of different

(10:40):
applications in days or weeks, not months or years, and that lets you expand into more industries.
Where are you today?
And who are your customers, and what problems are you solving for them?
So let's start with what we actually do.
So Hercules is a platform to assemble, like an assembly line, right?

(11:00):
In manufacturing, but to assemble virtual AI workers.
Let's unpack what a virtual AI worker is, and then it will be obvious who we're selling it to, who is using it, and who's benefiting from it.
So a virtual AI worker is basically a replica, a twin, of a particular specialist. Might be a lawyer, but again, not doing the practice of law; doing the business of law, doing operations,

(11:24):
everything that people don't like doing. Or it might be a financial analyst analyzing prospectuses, or an operator doing private equity capital call processing, and so on and so forth.
So what we do believe in is that humans doing hard jobs on minimum wage, while robots write

(11:45):
poetry and paint pictures, is not the future we all want.
We want machines, AI, to take care of those mundane and repetitive tasks.
So people can actually focus on creative things.
I know it goes a bit against the grain of generative AI as it is right now, which generates pictures and music and everything, and it's fine.

(12:06):
For us, our mission is to build the AI-powered workforce to actually enable people to spend more time on doing productive things.
And it's been that way from the very beginning of the company.
It's always been the same.
And if we look at the greatest companies in recent years who emerged on top of existing

(12:26):
industries. For example, Uber, right?
It's the largest taxi company in the world without having any cars on its balance sheet.
We look at Airbnb: it's the biggest hotel chain in the world without having any property.
We wanna build the largest workforce company in the world without having anyone on payroll.
Because those are AI virtual workers; we give them to our clients to do the work that

(12:49):
typically people are doing.
And as Sam Altman was saying, there will be billion-dollar companies that are run by a couple of people and an army of AI robots. Right.
So that's exactly what we do.
But looking backwards, when we started, we've been building on top of something that is a very well-hidden elephant in the room.
There's a $1 trillion problem on the market.

(13:11):
I would call it an invisible problem.
Everyone knows about it, but no one's talking about it: clunky old software.
And organizations run on software.
Might be ERP, might be CRM, might be anything.
There are thousands and thousands of pieces of software in each organization.
These pieces are not connected.
People are jumping between them, doing some manual work, and so on and so forth.

(13:32):
So we build virtual workers to actually interconnect that software. But not just software; people as well, different departments.
And that's exactly what virtual workers are.
We started building them vertically, one by one. Started in legal, automating compliance, time capture, quote-to-cash, invoices, those things that actually consume a lot of time.

(13:54):
And then went into broader enterprise, like insurance, financial services, and others.
But we stayed inside the boundaries of regulated industries.
Got it.
And, isn't, it feels like there's a ton of companies doing this now?
How do you stand out?
Of course.
So there are so many companies that want to make sure that people spend time on what's really

(14:19):
important.
They do it.
They're all addressing it differently.
Let's imagine a spectrum, right, like a line.
On one side, that would be RPA, with very simple processes to automate.
Even if you add AI to it, it's still going to be AI-powered RPA.
Robotic process automation.
But on the other side, there might be gigantic projects being taken care of by C3.ai or Palantir,

(14:46):
which are amazing companies, but their projects are gigantic.
What we do: we've built Hercules as a platform, as an assembly line.
You can take components that already exist, are working, and are tested with many customers, and assemble the application, the virtual worker, with our help, of course. Assemble it

(15:07):
pretty quickly and put it to work, to automate those very complex workflows.
So we're in, basically, the sweet spot at the intersection of very complex workflows automated by AI virtual workers, which can be delivered really, really quickly.
We're talking about weeks. Not even months.
And so are you going into any particular types of companies, high security, mid security,

(15:29):
large companies, medium-sized companies? Are you going into companies with, like, a Snowflake implementation, or Okta?
Yeah.
We focus on the large enterprises.
Some of our customers are Fortune 1000.
Actually, most of them are bigger enterprises.
They all care about security.
And also they are in regulated industries.
Again, financial services, insurance, and legal.

(15:50):
That means that we need to support all of the infrastructure they already have.
And you can't build something like this, to support all of that underlying infrastructure, overnight.
That's what took us 7 years to build.
Hercules has 200 components.
For example, if a client is using Snowflake and Okta, we can support it.

(16:10):
And the virtual worker is gonna be delivered exactly, that exactly fits the enterprise infrastructure.
This is critically important.
And another important thing for enterprises is cost of ownership.
If they already have systems in place, they don't want to change those, or rip and replace.

(16:31):
It's important for the new technology, new products, to work on top of what exists already.
And so what's the typical process?
When you come into a company, you say, look, we're gonna give you these virtual workers, and we're gonna charge you a SaaS fee.
How do you charge folks?
We charge $300,000 per AI worker per year.

(16:51):
That said, unlike many other AI companies, we are not hosting those AI systems on our side.
Remember, they are running on the infrastructure of our clients.
So our margins are incredibly high.
If you compare with other companies, where you need to spend an enormous amount of money on GPUs and infrastructure.

(17:12):
We don't have those costs.
And also, for clients, it gives predictability of how much it will cost.
We don't have variable costs, and clients have it inside the security perimeter.
So this is one of the most important things for clients right now, because we've seen clients prototyping themselves, using available tools on the market, and they see

(17:34):
that the cost of running things, even on Azure, is astronomical.
And sometimes they are saying, wait a second.
I'd better wait until it becomes cheaper, better, and faster.
Though it promises enormous benefits, it's still far away from an enterprise-grade product.
And I'll wait, and it's too expensive to run.
So I'll wait.
But if they have an alternative, like with us, they can just grab it, install it, and have it

(17:57):
up and running in a matter of weeks.
And then they know exactly what their costs are gonna be.
Exactly.
It's very predictable.
But there are other things that are critically important.
I've seen startups that are jumping in and trying to build an enterprise solution without thinking how much it will cost for a client.
So, for example, we are using a hybrid approach: GPU plus CPU.

(18:19):
A virtual worker can use GPUs for specific tasks, but then CPUs kick in, and you can run smaller models on CPUs, making it very, very inexpensive, basically utilizing the existing infrastructure.
So for clients, it's critical to know how much it will cost at scale.

(18:39):
Got it. And so are you using one particular LLM, or are you using multiple?
We have about 9 right now.
Some of them are smaller, and those are ours, and of course we use the best in class, like Llama and Mistral, because we don't wanna train our own models.
We fine-tune them.
We make them specialized.

(19:00):
For example, we just released a new model called Rosetta Stone.
Why is it called Rosetta Stone? Because it does something that no other model in the world is doing.
It's structured data transformation.
It's like ETL, but without rules.
It takes any structured data and turns it into another type of any structured data.
It's just 7 billion parameters, but it beats GPT-4

(19:22):
by 30%.
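To make "ETL, but without rules" concrete, here is a minimal, hypothetical sketch (not Hercules or Rosetta Stone code): classic ETL needs a hand-written field mapping for every source/target schema pair, while a rule-free transformer would infer the mapping from the data itself. The toy `transform` below fakes that inference with simple field-name normalization, just to illustrate the interface.

```python
# Hypothetical illustration of structured-to-structured transformation.
# A model like the Rosetta Stone described above would learn the mapping;
# here, a trivial name normalizer stands in for that learned schema matching.

def normalize_key(key: str) -> str:
    """Stand-in for learned schema matching: canonicalize a field name."""
    return key.strip().lower().replace(" ", "_")

def transform(record: dict, target_fields: list) -> dict:
    """Map a source record onto target_fields by normalized-name matching."""
    normalized = {normalize_key(k): v for k, v in record.items()}
    return {f: normalized.get(normalize_key(f)) for f in target_fields}

src = {"Invoice Number": "INV-42", "Total Amount": 1200}
print(transform(src, ["invoice_number", "total_amount"]))
# {'invoice_number': 'INV-42', 'total_amount': 1200}
```

A real model would handle renames, splits, and format conversions that no name normalizer could; the sketch only shows the shape of the task: structured data in, differently structured data out.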
Got it.
So this Rosetta Stone comes with your product when somebody becomes a client of yours.
It's something you guys developed.
Because you've developed your own ML models, which you're still using.
Yes.
And now you're actually adding some of the other models, and Rosetta Stone is just another example of one that you built internally, or
You're right.
And there is also an interesting thing about the quality of AI products, and it also gets back

(19:46):
to what models are being used. And not just what models, but how they're being used.
So we all know about hallucinations.
Right?
It's a pretty big problem, and the more complex the workflow you want to apply an LLM to, the more hallucinations you will have.
And of course, there are techniques to battle it. But actually, the best way we found is

(20:08):
neurosymbolic AI.
It's where an LLM provides some output; might be extraction, extracting something from a document or database or whatever.
And then another LLM turns it into rules.
But the rules are 100% accurate, and then it delivers those rules to another model.
And that's how you avoid error propagation when you build very complex systems.

(20:33):
For example, some of our AI virtual workers have 4 or 5 LLMs working as an ensemble, one after another.
And in order to remove this error propagation, you have to have neurosymbolic AI, and we had to build models that actually do that.
It was pretty labor-intensive, but now we can chain any models together and make sure that they provide the absolute highest possible quality.
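The chaining idea can be sketched in a few lines; everything here (the extraction stub, the rules, the downstream stage) is a hypothetical stand-in, not Hercules' actual pipeline. The point is the shape: each model stage's output passes through a deterministic rule gate before the next stage sees it, so a bad value halts the chain instead of propagating.

```python
# Sketch of a neurosymbolic gate between model stages (illustrative only).

def extract_amount(doc: str) -> dict:
    """Stand-in for an LLM extraction stage."""
    return {"amount": 1200.0, "currency": "USD"}

RULES = [
    ("amount is numeric", lambda r: isinstance(r["amount"], (int, float))),
    ("amount is positive", lambda r: r["amount"] > 0),
    ("currency looks ISO", lambda r: len(r["currency"]) == 3),
]

def validate(result: dict) -> dict:
    """Deterministic gate: every rule must pass, or the chain halts here."""
    for name, rule in RULES:
        if not rule(result):
            raise ValueError(f"rule failed: {name}")
    return result

def book_payment(result: dict) -> str:
    """Stand-in for the next model stage, which only sees validated input."""
    return f"book {result['amount']} {result['currency']}"

print(book_payment(validate(extract_amount("...invoice text..."))))
# book 1200.0 USD
```

With real models, the rule layer would be generated per workflow; the mechanism (validate, then hand off) is what stops error propagation across an ensemble of 4 or 5 chained models.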
Got it.
So this goes back to your discussion of how you've built out the infrastructure and the processes to really take care of the customers, so that it's easy for them to implement without hallucinations.
You didn't actually come up with Llama or with GPT-4, but you figured out how to

(21:18):
structure them and implement them inside these large organizations.
Yes.
We believe LLMs are already a commodity, and they're gonna be cheaper, faster, and better.
And it's normal.
For example, right now, James, you don't care where your electricity is coming from when you're making coffee in the morning in your coffee machine.

(21:39):
You care about the quality of your coffee.
And exactly the same is gonna be happening with LLMs.
They're gonna be
So you're predicting that the cost and the stickiness of a GPT-4 or 5 or 6, compared to an Anthropic, compared to a Llama, is gonna go to
I would say it's gonna be so easy to switch between them if you have the infrastructure in

(22:01):
place.
So, for example, I'll give you another example.
Right now, let's say we deploy, we give our customer an AI application, a virtual worker that has, like, 5 models inside, and one of the models is Llama 2.
Next year, or this year, Llama 3 is being released, for example.
How do you change?

(22:21):
How do you swap models?
Without rebuilding the whole application from scratch.
We learned that the hard way 5 years ago, before LLMs, because we needed to update models.
So we have a module, which could be a separate startup on its own, that does model swapping.
And, for example, reinforcement learning data is stored in a separate database.

(22:43):
And when you replace the model, automatically,the reinforcement learning data being set up on
top, and it the model Flint being fine tunedautomatically without without us even touching
it.
So this is an example how infrastructure willactually drive, the way, models are gonna be
used.
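A rough sketch of that swapping idea, with invented class names (this is not the actual module): keep the feedback/reinforcement data in its own store, so that replacing the model just means replaying the same data onto the replacement, rather than rebuilding the application.

```python
# Illustrative model-swapping sketch: feedback data outlives any one model.

class ModelSlot:
    """Stand-in for a deployable model (e.g. a Llama 2 or Llama 3 checkpoint)."""
    def __init__(self, name: str):
        self.name = name
        self.adapted_with = []  # examples this model has been tuned on

    def fine_tune(self, examples):
        self.adapted_with.extend(examples)

class Pipeline:
    """Holds one model plus a feedback store that survives swaps."""
    def __init__(self, model, feedback_store):
        self.feedback_store = feedback_store
        self.model = None
        self.swap(model)

    def swap(self, new_model):
        """Replace the model and replay stored feedback automatically."""
        self.model = new_model
        self.model.fine_tune(self.feedback_store)

store = [("input A", "label A"), ("input B", "label B")]
pipe = Pipeline(ModelSlot("llama-2"), store)
pipe.swap(ModelSlot("llama-3"))
print(pipe.model.name, len(pipe.model.adapted_with))
# llama-3 2
```

The design point is the separation: because the feedback store is not baked into any one model, swapping is one call rather than a rebuild.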

(23:04):
So if it's so easy and painless to swap Llama 2 out and put an Anthropic model in instead, why not?
It's about better, faster, and cheaper.
Got it.
Got it.
So all the people who invested in these big models are gonna lose all their money?
I don't think so.
Like, electricity-generating companies are not; they will turn into PG&E, right?

(23:25):
And as much as you don't like PG&E, I think they're gonna be an important part of the infrastructure, the underlying layer, because you still need models.
Right?
But they're gonna become, they're gonna become, like, utilities.
So you're charging 300,000 per virtual worker.
Can I have multiple virtual workers for that same 300, or do I have to pay 300 for every

(23:47):
individual?
One virtual worker can do the work of thousands of people.
Right?
So, or automate processes for thousands, some thousands of people.
We separate those virtual workers by the type of tasks or specialization they focus on.
For example, you might have one virtual worker in the financial department who does the quote-to-cash

(24:11):
process, complete automation, but you can also have one in the legal department doing paralegal work: a worker that does contract analysis and extraction from contracts, and then passes that information to another virtual worker.
That's the beauty.
They actually talk to each other.
We also built the system so that virtual workers are not isolated.

(24:32):
They talk to each other, passing information to each other.
But in this case, you'll have 2 workers, and you'll pay 600 for the two of them.
And we see clients who started with 1 and are going, basically, full scale with 3, 4, 5, and are now putting much more, millions of dollars, in budgets.
So for 300k, I'm not buying a virtual worker.

(24:54):
I'm buying a type of virtual worker, and then we can let that scale up as much as I want.
And then they can all intercommunicate.
So, within all of these, all this data is being passed between these models now.
I mean, data and security and AI are such a hot topic.
So why is this integration of AI into enterprise business processes so scary for

(25:17):
those enterprises?
Number 1, because of the quality.
Right?
We all heard stories about some engineer basically hacking the chatbot of a dealership company and buying a Chevrolet Tahoe for $1, right?

(25:38):
And things like this are inevitable when you have a very generalized approach.
Basically, you give the LLM to do whatever you want it to do.
But if you have a very specialized approach, the risk of having low quality is minimal.
Or reduced to a minimum.
So the one thing that enterprises all jump into, the use case, of course, is generative AI,

(26:04):
let's make it generate text.
Let's do marketing emails, blogs, whatever.
And that's great, but it's not the bedrock of those processes that those organizations are running on.
So if you are a Fortune 1000 and you're using generative AI to generate marketing emails, well, it's still a great improvement, but it's marginal.

(26:25):
But if you are using a generative AI solution to analyze, if you're a financial institution, for example, to analyze fraudulent transactions and flag those and prevent those, this is where the real ROI comes from.
We've got data integrity issues.
We've got compromised confidentiality.
You've got just a whole number of different types of security issues with

(26:49):
implementing AI, and these organizations are just beginning to understand the implications of it.
Is that right?
Correct.
And also, startups who are trying to sell right now into enterprises, they will still have to have all those components, to make sure it's an enterprise-grade solution they're delivering.

(27:09):
Given that you've been doing this for 7 years, you've kind of worked through all of these challenges.
You've encountered them 4 or 5 years ago, found solutions for them.
And now they're just part of the infrastructure that you're providing to people as part of the platform.
And we had to eat our own dog food before we built the platform.
We had to build components and solutions ourselves to make sure that we deliver the

(27:30):
highest possible quality.
And then we started turning those components into a platform.
But, like, looking backwards, if we had started to build the platform from scratch, we would not even know what to build into it.
What clients would need.
We had to walk that path to actually understand it and realize it.
Right.
It's interesting.
It's almost like you started with a killer app, and then you built a platform on top of

(27:53):
that, and that killer app was just basically reading long emails and summarizing them, because that's what every knowledge worker does.
And then and then you've expanded from there.
Okay.
And so, 5 to 10 years from now, what does this whole AI market look like?
And we've got Microsoft and Google; they wanna supply all this stuff to the enterprise.
You're supplying stuff.
Okta is saying they're gonna, you know, add AI into their existing product.

(28:16):
What is, how does the market evolve?
I mean, all the listeners here are gonna wonder what your prognostication is.
I'm a metaphor guy.
I like metaphors because they explain things visually.
So I think of it as a planet.
And the planet will have 2 poles.
One pole would be open-source models, and the other one would be closed-source models.

(28:40):
Both will exist, and they will have their pros and cons. But there will be continents, I would say two main pieces, where AI-first companies are building something that's never been done before.
Because now they have the ability to address use cases that, without generative AI, were not
Because now they have ability to address usecases that without generative AI, were not

(29:02):
possible to address.
And there will also be AI-enabled companies.
Like traditional software. Take Salesforce: they're investing a lot into AI, and have been doing it for a long time.
Traditional businesses, traditional software that is AI-enabled.
And those 2 worlds will coexist.
It's not that one will win over the other.

(29:23):
It's just different use cases they're gonna be addressing.
And LLMs as we know them, like, providers of LLMs, will be, again, a commodity.
So we will not even think about what model we are using inside the product.
It's the output that we are using.
And right now, most of the money goes to hardware providers, because of the requirements for

(29:48):
GPUs and everything, but that will also change.
Models become better.
They're less demanding of GPUs.
We use a hybrid approach; in our case, it's 10% GPU usage and 90% CPU, and the like.
So I would say everything is gonna becommoditized.

(30:09):
But the platforms that will be able to deliver the ROI to the end users, or end customers, are gonna be actually winning.
Right now, it's the opposite, because it's very, I would say it's a very immature market.
Your prediction is that you guys are gonna be a platform. Is there gonna be another platform,

(30:31):
or will it just be Hercules?
Of course, there's gonna be other platforms.
It's obvious.
It's a great, great way of building something that never existed before.
Because in this case, when you have a platform, your clients are telling you what they need.
You don't have to go and kind of harangue them.
They will find the use cases.
They are coming.

(30:51):
They're starting building things, and you just provide them with the ability to do that.
That's where scalability comes from.
So I would say it's absolutely obvious that a lot of companies will go that way. But even if they start now, it will take them time to build those components.
Because it's so easy to prototype right now.

(31:13):
You take a couple of models, slap LangChain on top, do RAG, and it's still gonna be a prototype.
Clients don't want prototypes.
I use this analogy all the time.
We are at the dawn of aviation for AI.
Right?
It's a paper plane made out of dry wood, duct tape, and spit, and it's in the air.
It's flying three hundred yards.
Amazing achievement.

(31:34):
We were walking on the ground.
Now we're in the air.
But enterprise clients need a Boeing 777.
With all the infrastructure, and a cocktail served in first class.
That's what they need.
And we are still, as an industry, pretty far away from it.
Got it.
Got it.
Got it.
So what is going on with, let's say, Google and OpenAI, in that battle?

(31:59):
What they're building now, in terms of these LLMs, is gonna be subsumed into the platforms. But OpenAI has shown that they wanna build some sort of a marketplace, at least; they're thinking about how they move up into that sort of platform area by letting other people build applications.
Is that something you're gonna see them doing, or is it just not in their DNA?

(32:20):
Well, they might, and it's a pretty obvious idea, and it's a great idea.
They definitely will be going in this direction, but I don't think they're gonna be going into hardcore enterprise, because, again, that will require them building components, not just providing LLMs. Or it will require developers to build those components.
And it's still gonna be hard.

(32:40):
I would bet on Microsoft, because Microsoft has a lot of those components already built.
But regarding the Google and OpenAI kind of battle that we see right now, I think Google has a huge advantage.
They have data.
They have an enormous amount of data: YouTube.
They have Gmail.
They have Google Docs.
The problem with Google, as I see it, is not technology.

(33:02):
They have the smartest people in the world, and they have an enormous amount of money.
It's culture.
Culture of innovation. And we know that culture eats strategy for breakfast. Google is the navy; OpenAI, like, they are pirates.
And I would always bet on pirates, because they move faster.

(33:23):
In this case, well, we'll see how it unfolds for Google, but so far it's so interesting to see that battle unfolding from the sidelines. And whoever wins, we will be using whatever is the best for our clients.
5 years from now, what are we gonna be looking back on and kicking ourselves for not realizing?

(33:44):
That's a great question.
I would say we will probably be kicking ourselves for not realizing how hard the enterprise market still is.
And right now, we're kind of betting on, like, AI will change everything.
The same way we were saying the cloud will change everything.
And before that, we were saying mobile will change everything.

(34:06):
And the internet before that.
And it did, but we are underestimating how much time and effort it will take.
So whenever we think, like, okay, I have a great technology and a product, and all those possibilities, I'll win the market in 1 year. No way.
It's still gonna be long and expensive.

(34:28):
Okay.
Well, Alex, this has been fantastic, to talk with you.
I, I think the fact that you've been at this now for so long, and are so wise about how enterprise actually works, and actually bringing AI to enterprise.
You're, like, one of the leaders in doing that.
It's just a real pleasure to hear your perspective on what's been happening, what's gonna happen next.
Thank you so much.

(34:49):
Thank you, James.
This was a great pleasure.
