
October 7, 2025 • 45 mins
"In the context of where Confluent can play a critical part, it's also the interoperable integration with all the respective AI ecosystems. If you think about what AI is doing, it's working across microservices, working across data lakehouses, databases - could be a different endpoint service. Bringing all that together in a secure and consistent manner, constantly serving that information, is where I think it plays the most pivotal role." - Kamal Brar

Fresh out of the studio, Kamal Brar, Senior Vice President of Worldwide ISV and Asia Pacific/Middle East at Confluent, joins us to explore how data streaming platforms are becoming the critical foundation for enterprise AI across the regions. He shares his career journey from Oracle to Confluent, reflecting on his passion for open source technologies and how the LAMP stack era shaped his understanding of real-time data challenges. Kamal explains Confluent's evolution from the category creator of Kafka to a comprehensive data streaming platform combining Kafka, Flink, and Iceberg, emphasizing how real-time data infrastructure enables businesses to harness both public AI models and proprietary enterprise data while maintaining governance and security. He highlights compelling customer stories from India's National Payments Corporation processing billions of UPI transactions daily to healthcare AI applications serving patient needs, showcasing how data streaming solves fragmentation challenges that plague 89% of enterprises attempting AI adoption. Addressing implementation hurdles, he stresses that data infrastructure is the most critical piece for AI success, advocating for standards-based interoperability through Kafka's protocol and Confluent's extensive connector ecosystem to unlock siloed legacy systems. Closing the conversation, Kamal shares his vision for Asia Pacific becoming Confluent's largest growth region, powered by massive-scale innovations in payments, mobile transformation, and AI on the edge for autonomous vehicles and next-generation interfaces.

Episode Highlights:
[00:00] Quote of the Day by Kamal Brar
[01:00] Kamal's Career journey from computing to open source
[04:00] Attraction to data streaming and Kafka ecosystem
[07:00] Confluent's mission: data streaming platform leadership
[10:00] Why data streaming is critical for AI
[13:00] Report findings: 89% eager to adopt DSP
[14:00] Data fragmentation remains biggest enterprise challenge
[17:00] Real-time visibility becomes competitive differentiator
[20:00] AI-enabled applications transforming enterprise stack
[24:00] India payments: Kafka powers UPI infrastructure
[27:00] Data governance and security in AI
[33:00] Data infrastructure: foundation for scalable AI
[35:00] Connectors enable seamless system interoperability
[38:00] Interoperability unlocks fragmented enterprise data
[39:00] Asia Pacific driving aggressive regional growth
[42:00] What does great look like for Confluent
[44:00] Closing

Profile: Kamal Brar, Senior Vice President WW ISV [Independent Software Vendor] & Asia Pacific/Middle East, Confluent https://www.confluent.io

https://www.linkedin.com/in/kamalbrar

Podcast Information: Bernard Leong hosts and produces the show. The intro and end music is "Energetic Sports Drive." G. Thomas Craig mixed and edited the episode in both video and audio formats.

Here are the links to watch or listen to our podcast:

Analyse Asia Main Site: https://analyse.asia

Analyse Asia Spotify: https://open.spotify.com/show/1kkRwzRZa4JCICr2vm0vGl

Analyse Asia Apple Podcasts: https://podcasts.apple.com/us/podcast/analyse-asia-with-bernard-leong/id914868245

Analyse Asia LinkedIn: https://www.linkedin.com/company/analyse-asia/

Analyse Asia X (formerly known as Twitter): https://twitter.com/analyseasia

Sign Up for Our This Week in Asia Newsletter: https://www.analyse.asia/#/portal/signup

Subscribe to the Newsletter on LinkedIn: https://www.linkedin.com/build-relation/newsletter-follow?entityUrn=7149559878934540288


Episode Transcript

Available transcripts are automatically generated. Complete accuracy is not guaranteed.
(00:00):
In the context of where Confluent can play a critical
part, it's also the interoperable integration with
all the respective AI ecosystems. Now, if you think about what AI
is doing, it's working across microservices, working across
lakehouses, databases - it could be a different endpoint service.
To bring all that together and to do it in a secure and
consistent manner, constantly serving that information is

(00:22):
where I think, you know, it plays the most pivotal role.
Welcome to Analyse Asia, the premier podcast dedicated to
dissecting the pulse of business, technology and media
in Asia. I'm Bernard Leong, and the ability
to harness data from real-time streaming has become a strategic
necessity for generative AI. So with me today is Kamal Brar,

(00:44):
Senior Vice President of Worldwide Independent Software
Vendor (ISV) and Asia Pacific and Middle East at
Confluent, to help me unpack the emerging role of
data streaming platforms in the world of AI and also how it's
going to shape the future of Asia Pacific.
So Kamal, welcome to the show. Thank you, Bernard.
Thank you for having me. Yeah, I always want to hear

(01:07):
origin stories. And how do you start your
career? Yeah, it's a, it's an
interesting one. So, you know, I'm sure some
folks are kind of still thinking through what they want to do in
school and kind of deciding what they want to actually end
up doing. For me, it was pretty clear.
I actually really had an interest, strong interest in
electronics and computing at a very young age, I would say.

(01:29):
So for me, it was kind of like Iwould want to pursue a career in
computing. And back then computing was kind
of taking off like in the late 90s, we didn't have, it was a
lot of the curriculum in terms of universities was around
software development. It wasn't really around mobile
and all the other amazing technologies that came later,

(01:49):
but it was around fundamentally software engineering and, and,
and databases, right? So those are the 2 core areas.
And so I ended up taking a bachelor of computing, which turned into
a bachelor of computing science when it was rebranded.
And so I was very much focused on, hey, I want to learn all
about software programming. You know, back then it was
Perl, Apache, MySQL, all that type of good stuff.

(02:11):
LAMP. LAMP.
Lamp. Yeah, if you recall.
So I was, I was definitely into that.
And so I did that as kind of like my foundational educational
studies. But then I kind of was very
fortunate, you know, as I wrapped up my tertiary
education, I got an opportunity to work for Oracle.
And that was my, even prior to Oracle, I was working at

(02:33):
Optus, which was a, you know, obviously a telco in Australia.
And then I kind of shifted to Oracle.
And Oracle was an amazing company back then.
Anyway, it was like still is, still is, still is.
But back then it was 25,000 employees.
It was a very different company to what it is today.
And you know, I'm sure if you'retracking Oracle, it's making a
lot of interesting investments on AI.

(02:56):
And so for me, it was just a great learning playground.
Like I got to learn a lot about Oracle technologies, core tech, and
then, you know, evolve from core tech to the applications,
understood about, you know, Oracle applications.
And then from there, I actually decided to take a leap of faith
and having spent 6-7 years at Oracle, I kind of wanted to
experience something different. So, you know, having worked in,

(03:18):
as I say, the Oracle machinery where you learn a lot and you
learn about really some interesting technologies.
It was a number one database company and technology company,
you'd argue at that period of time, but there was so much
disruption kind of, I would say on the onset.
And so I, I got to learn from working in some of the most
exciting open source technology companies.
So, you know, it's kind of interesting.

(03:38):
My career's been a mix of data, but also, you know, I've worked
in four open source companies, so it's, it's been amazing.
So one thing, one curious fact, because pre this conversation, I
took a look at your background, you worked for Oracle, Hortonworks
and now Confluent. What drew you to data
streaming in your present role, and what do you see in

(03:59):
all these open source technologies that gives you this
common thread in your career?
Yeah. So remember I referred to the
LAMP stack. I actually worked for MySQL for
five, five and a half years, and that's where I got the bug, right.
And literally, you know, MySQL was a very small company, you
know, I think with 350 odd employees.
When we got acquired by Oracle, that was my second stint at Oracle.

(04:20):
But it was just a phenomenal cult following.
Like, you know, back then when Facebook was coming together, it
was all powered by MySQL, the database.
And you think about it right now, it's like a given.
But back then, you know, when Facebook was building its
architecture for scale, for mobile and web, there was no
database of choice. And so I think that was kind of

(04:40):
the era where we were literally in, I would say, a very
fundamental part of the transition on the web.
And, you know, mobile hadn't taken off, but what was truly
there, social media was truly taking off.
And so I think what attracted me to the Confluent story was, you
know, I always knew Confluent was a great company, a great
brand, done tremendously well. It kind of created the category

(05:00):
of Kafka and data streaming. So for me, it was like going
back to the core. I always wanted to come back to
open source. And you know, I'd worked in
other technology companies and you know, and so it was more for
me to go back to something I enjoyed.
So I'd been in MySQL, I'd been in Hortonworks, you know, as
you, as you talked about. So it's just coming back to open
source? There's always open source

(05:21):
enterprise tech. So now you reflect on your
career journey, right? What are the kind of lessons you
will share with my audience? Yeah, I, I think the fundamental
lesson I would say is, you know,there are no limits in what you
can achieve in, in any role or in particular any, any company
you go work for or be part of. You know, every company has very

(05:41):
unique elements of, of learning.And so, you know, there's
different growth stages in the company, there are different
technical challenges. You know, there are different go
to market challenges. And so when you think about all
the things that have to go rightto make a company so successful,
it's kind of amazing, right? I mean, you see very few
companies cross the chasm of, you know, getting to a billion
dollar run rate in open source. You know, MySQL was acquired

(06:04):
quite early. You know, Hortonworks,
you know, we got acquired as well as part of Cloudera.
So most of the companies don't get to that billion dollar run
rate. I mean, obviously Red Hat was
one of them and did really well and continues to do well.
And I think, you know. Snowflake, Databricks as well.
Yeah, Snowflake, Databricks, and, and I think these
companies, you think about it, all these technologies were open

(06:25):
source, right? So if you look at what, what
Databricks has come from with Spark, you know, again, very
much open source. So that's, that's pretty
exciting, right? And so I think open source has
become and and definitely the foundational projects in Apache
and the community have been really pivotal in defining the
tech landscape. Do you think the community is

(06:46):
also part of the core of what makes open source software such
an interesting part of your whole career journey?
Yeah, actually a hidden secret: I used to be a
contributor a long, long time ago.
I don't write code anymore. But I think the community angle
is absolutely pivotal. You know, without the community,
it's very difficult to get the adoption.

(07:07):
And so adoption defines kind of like the opportunity for us to
to do some of these more disruptive technologies.
Otherwise you'll be stuck in themost obvious, it's safer choice,
right? And so I think the community
enables us to do that. Let's get to the main subject of
the day because we're going to talk about the data streaming
and AI readiness in the Asia Pacific, and also the Middle East we

(07:28):
should cover in the region as well.
So maybe just to baseline to help my audience to understand
Confluent better, because not everyone knows about all these
enterprise tech companies. What's Confluent's mission and
how does it fit into, say, enterprise AI and the data
infrastructure landscape? Yeah.
So maybe it's just worthwhile - for those of your viewers who

(07:48):
may not be familiar with Confluent as a company, it
started over 10 years ago and the project was around Kafka.
So Kafka today is very de facto - people refer to Kafka as
pretty much the standard - but that's when it was defined.
So, you know, our three founders, Jun, Neha and Jay,
who's our current CEO and founding co-founder, kind of out
of LinkedIn had this project. And so out of this project,

(08:09):
they, they ended up spinning outa company.
But the, the challenge, of course, and if you're familiar
with LinkedIn is, you know, I only want to subscribe to
information that's relevant to me and to be able to consume
that information efficiently and, and just imagine the total
volume of people who are on that platform.
And then to, to do that efficiently and to be able to
publish that data continuously as a feed that's relevant

(08:30):
without having to keep, you know, making a call.
Like in, in the world of database, we used to be like,
you know, you query something and you get a response, right.
So it's kind of like a, you know, a request based access
where this was continuous feed of information and real time.
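To make the publish/subscribe model just described concrete, here is a minimal sketch using Confluent's open source Python client, confluent-kafka. The broker address, topic name and consumer group are placeholder assumptions, not details from the episode.

```python
# Minimal publish/subscribe sketch with Confluent's Python client (confluent-kafka).
# pip install confluent-kafka
from confluent_kafka import Producer, Consumer

BROKER = "localhost:9092"   # placeholder broker address
TOPIC = "profile-updates"   # placeholder topic name

# Publish events continuously as they happen, instead of waiting to be asked.
producer = Producer({"bootstrap.servers": BROKER})
producer.produce(TOPIC, key="user-42", value='{"event": "title_changed"}')
producer.flush()  # block until delivery is confirmed

# Subscribe once, then receive a continuous feed of relevant events.
consumer = Consumer({
    "bootstrap.servers": BROKER,
    "group.id": "feed-service",        # placeholder consumer group
    "auto.offset.reset": "earliest",
})
consumer.subscribe([TOPIC])
try:
    while True:
        msg = consumer.poll(timeout=1.0)   # events arrive as a stream, no repeated queries
        if msg is None or msg.error():
            continue
        print(msg.key(), msg.value())
finally:
    consumer.close()
```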
And so that's kind of how the company formed and Kafka became
the most popular, one of the most popular Apache projects

(08:51):
and, and is still today a growing Apache project.
And so the, the company over thelast 10 years is kind of pivoted
from becoming the de facto category creator in Kafka in
data streaming to becoming the DSP company.
And so, you know, for those of you who are thinking about what
DSP is, you know, fundamentally,you know, you have real time
event streaming, we know that's number one with Confluent and

(09:14):
Kafka. And then in addition to that,
you need to be able to kind of process these streams and so be
able to to manipulate or transform these streams.
And that's where Flink became the standard.
So you know, three years ago, about 3 years ago, I believe
Confluent acquired a company called Immerok, yes.
And so that's where we essentially brought the founders
of Flink into our ecosystem.

(09:37):
And so Flink and Kafka became one until today.
That plays a very important role in the context of AI and I'll
explain a bit later. And then more recently what
we've done is introduced Iceberg.
So Iceberg is becoming kind of like the de facto table format,
or essentially the interoperability layer for the lakehouse,
where you can kind of hopefully be able to share information

(09:58):
between the data stores or the databases of the data lake
houses and also, you know, the platform itself.
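As a rough illustration of the Kafka-plus-Flink combination described above, this PyFlink sketch declares a Kafka topic as a streaming table and runs a continuous transformation over it. The topic, broker address, schema and aggregation are illustrative assumptions, not Confluent's implementation.

```python
# Sketch: continuous stream processing over a Kafka topic with Flink's Table API.
# pip install apache-flink  (the Kafka SQL connector jar must also be available to Flink)
from pyflink.table import EnvironmentSettings, TableEnvironment

t_env = TableEnvironment.create(EnvironmentSettings.in_streaming_mode())

# Declare a Kafka topic as an unbounded table (placeholder topic and broker).
t_env.execute_sql("""
    CREATE TABLE payments (
        account_id STRING,
        amount     DOUBLE,
        ts         TIMESTAMP(3),
        WATERMARK FOR ts AS ts - INTERVAL '5' SECOND
    ) WITH (
        'connector' = 'kafka',
        'topic' = 'payments',
        'properties.bootstrap.servers' = 'localhost:9092',
        'format' = 'json',
        'scan.startup.mode' = 'earliest-offset'
    )
""")

# A continuous transformation: per-account totals over one-minute windows.
result = t_env.sql_query("""
    SELECT account_id,
           TUMBLE_END(ts, INTERVAL '1' MINUTE) AS window_end,
           SUM(amount) AS total
    FROM payments
    GROUP BY account_id, TUMBLE(ts, INTERVAL '1' MINUTE)
""")
result.execute().print()  # in production this would stream into another topic or sink
```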
So I also had a stint when I was heading AI and machine
learning at AWS for Southeast Asia, where one of the key
things is usually in data, data streaming, specifically F1 for
example, where you need a lot of events.

(10:21):
And the key part of it, even for basic video analytics,
is that a lot of people always talk about Kafka Streams
and actually using Flink as part of the ingestion engine so
that they can get that information on it.
Maybe can you - that's a great use case probably.
Yeah, that's a great use case, right?
But can you explain like why data streaming is so important
to AI now? I think I can understand it

(10:42):
because I use ChatGPT every day. And then essentially there are so
many conversation streams going on in my chat window, but I
think it's only just one stream. But if I start thinking about
their 500 million users suddenly all using the same thing,
there's a very different conversation.
Yeah, that's very true. So you know, where's the context

(11:02):
of data streaming platform and and obviously AI.
So I think there are multiple facets to it.
The most obvious is you want to be able to do inference, right.
And so when you talk about your querying or asking questions in
ChatGPT or the prompts, as you call it, prompt engineering, you
know, you'd be able to understand the context of what
you're asking for. And so in that scenario,

(11:23):
majority of the technology or majority of the kind of like I
would say, the models in some shape or form want to be able to
store feedback loops and to understand, hey, what, what, how
do we make a decision? How did we get that data?
Where was that data? And how frequently was this
accessed or even things such as the metrics associated with that
particular question so that theycan learn and train and be more

(11:45):
efficient at the next particularask, right?
So you know, you may have, you know, you and I asking the same
question in a different way. So it kind of leads to learn and
be more efficient and how it canprovide that data.
So the inference part is super, super important.
So being able to do that efficiently, being able to, you
know, do that across multiple data sources, I think is is is

(12:05):
very logical. And that's where, you know,
Flink comes into play. So that's that's a very
important part. The other part link to that is
obviously the model feedback loop, right?
So to enhance the models, you want to be able to store that
data for a long, long period of time and or at least be able to
store it to understand how the models can improve.
The most obvious is, you know, as these models, you know, look

(12:26):
at not just web data. And, and if you think of the
enterprise context, the, the models are generative AI.
So they understand the web, theyunderstand the machine learning
training that's already happened on, on those models - like you see,
ChatGPT or any of the models, they'll tell you when it's
trained up to, or, you know, the news is a little bit
behind, right? But in the context of
enterprise, they know nothing about your business, nothing.

(12:47):
And, and for good reasons, right, That you that, that data
is very much specific to your business requirements and it's
probably a competitive differentiator.
And so that data cannot be publicly shared.
And so how do you bring the power, the harness the power of
the public or I would say generative AI models and the
enterprise models. And that's where you know, with,

(13:07):
with obviously with Confluent, Kafka and Flink, we want to be
able to bring the inference. So you'll be able to essentially
bring together and query these different data sources and
endpoints, could be an endpoint that's external, right?
And then bring that data together very quickly and
efficiently and provide that, that kind of like I would say
the, the combined knowledge of both your enterprise data and

(13:29):
also the generative data and serve that to you.
OK, serve that to your user. Yeah, that's interesting
because I think a lot of people do not appreciate that actually
a basic data stream is actually quite important for production
level workloads. I think we're not talking just
like in the ChatGPT use case, right?
When you think about even like say a customer service chat
bot where it's servicing say a million queries, I think that's

(13:51):
when the strength of the data streaming provided by say Flink
or Kafka, it becomes very, very essential for the customer.
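A hedged sketch of the pattern discussed here: pull the freshest events for a customer from a Kafka topic, fold them into the prompt, and only then call the model. call_model is a hypothetical stand-in for whatever LLM endpoint is used; the topic, broker and field names are placeholder assumptions.

```python
# Sketch: enrich an LLM prompt with the freshest events before inference.
import json
from confluent_kafka import Consumer

def recent_events(customer_id, max_events=20):
    """Collect the latest events for one customer from a placeholder topic."""
    consumer = Consumer({
        "bootstrap.servers": "localhost:9092",          # placeholder broker
        "group.id": "context-builder-" + customer_id,   # placeholder group
        "auto.offset.reset": "latest",
    })
    consumer.subscribe(["customer-events"])             # placeholder topic
    events = []
    while len(events) < max_events:
        msg = consumer.poll(timeout=1.0)
        if msg is None:
            break                                        # nothing newer right now
        if msg.error():
            continue
        event = json.loads(msg.value())
        if event.get("customer_id") == customer_id:
            events.append(event)
    consumer.close()
    return events

def call_model(prompt):
    """Hypothetical LLM call - swap in your model provider's client."""
    raise NotImplementedError

def answer(customer_id, question):
    context = recent_events(customer_id)
    prompt = ("Answer using only this customer's latest activity:\n"
              + json.dumps(context, indent=2)
              + "\n\nQuestion: " + question)
    return call_model(prompt)
```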
Yeah. So one interesting question I
wanted to get to is there was a recent report on data streaming.
I think it's published by Confluent.
I actually got a chance to read it.
So can you just talk about some of the key takeaways from that
report itself? Yeah.

(14:12):
I, I, I mean, obviously I encourage everyone to go look at
the report, but the the three areas, I think the willingness
to drive or leverage AI based data technologies is very, very
high. Of course there are fundamental
challenges with it. I think 89-90% of our kind of
respondents said they're very eager to use the DSP
technologies and kind of leverage this for AI.

(14:33):
But they also cited that the fragmented
data sources are one of the big issues,
right? Yeah.
And so that's, that's been, I would say, just a tech debt,
right. So a lot of the systems that you
have in play today, if you thinkabout the nature of these
systems, you look at a bank, youlook at anything that's
significantly, I would say largein terms of complexity and in
size, you'll realize that it's actually a lot of work in in

(14:56):
just kind of turning, you can't turn those systems off.
There are a lot of legacy technologies that are that are
built around it. So you know, you still have the
challenge of data fragmentation,data silos where departments or
certain departments or certain parts of the business may
operate to very specific requirement and simply because
of high level of governance or compliance.
Yeah, compliance and then also regulatory separation of controls,

(15:18):
access controls. Yeah.
And so that makes it very difficult for you to unlock,
right. And in terms of, well, you know,
for example, a new fintech division or you know, a digital
bank may be able to move much faster because whilst they still
have the regulatory compliance, they don't have the technology
debt and they don't have the legacy systems that they may
have to, you know, they would have built something ground up,

(15:39):
which would have been more cloudnative, would have been
leveraging more of these emerging technologies.
And so I think that in that context, it makes it much easier
for for them to go and accelerate.
And, and what tends to happen isthe fragmentation exists in
large enterprise. And I would say more digital
native or digital born companies, it's a less of a
problem, right, just because they have a new digital stack or

(16:00):
a new definition, you know, and I keep referring back to, you
know, if you look back in the old days of the LAMP stack and,
you know, that's evolved to Node.js and so forth, right?
So the new, new tech stack kind of defines, I would say the the
pace of innovation, right on on that front.
But the data fragmentation or siloed nature of that data
continues to be a challenge. And so I think fundamentally,

(16:23):
you know, that's one area where you know, if you have standards
based interoperability or if youhave a standard communication
protocol and you know, Kafka becomes one viable option where
if you have the ability to talk Kafka and, and
stream in and out of Kafka, it becomes a much easier way to
share data, right. So that's, that's one challenge.
The, the other one is cost, skill shortage.

(16:44):
Not everyone's got the skills. Which I want to get to the
point, right. So I think you've got the point
first about the tech debt, and then partly
because of the skill shortage, which is about 91%
that was cited in the report. Why is the real time visibility
becoming such a critical differentiator for today for
enterprises? Yeah.
I mean, look at look at the engagement on how you interact

(17:05):
with your applications today, everything you and I do probably
is through a mobile device, right?
I don't really need to log on tomy laptop to actually engage in
my mobile banking app, right. So today if I want to make a
transfer in the good old days, I'd have to log into Internet
browser, I'd have to log in, usemy HSBC key or whatever bank I'm
with and I'd get a, you know, a time based authentication login

(17:29):
and I'll do my transfers. Today everything is based on my
mobile - through, you know, a notification that's, you know,
all through my notifications, will all in real time come to my
mobile device and I'll authorize transactions.
But the whole interaction has changed to real time.
And whilst I think in the past the, the nature of the
applications were that they werevery responsive, they were

(17:50):
web-based applications, but theydidn't have the same demands in
terms of real time capabilities.And so if you look at the
interactions with your bank or if you look at interactions with
your applications like 15 years ago or 10 years ago, we didn't
really have ride sharing. Right now, being able to track
where my Uber is or where my Grab is is kind of a given,

(18:11):
right? It's.
No, it's just kind of - we, we kind of assumed
that this is normal, right? It wasn't the norm 10 years
ago, right? And so that or whatever, you
know, the time when Uber took off and Grab took off.
So I think in that context, the nature of how we interact with
applications, how these applications actually feed data
and, and you know, just the relevance of that data becomes

(18:31):
critical. Because just imagine if I'm in
a, you know, if I'm actually looking for my Uber and that
information's delayed by 5 minutes, it's probably not very
good use of, you know, data for us, right?
Because that doesn't provide me the data I need in real time
that I want for the context, which is I want to understand
where my Uber is, what's my ETA to my destination and so forth.

(18:53):
So I think that relevance of context of time and the real
time nature and we kind of live in this society where.
They're right here right now. Yeah, right here.
Everyone wants things today now,right?
There's no concept of I'll wait for 5 minutes or 10 minutes.
If you ask ChatGPT a question, you probably want that pretty
quickly. You don't want to wait 10
minutes for an answer. But then when reasoning you
still need a minute and a half or maybe even like 3 minutes to get

(19:16):
an answer. Yeah.
I mean, I think with the right models, obviously you know, they
talk about fidelity and velocityin, in terms of the AI models.
I think it's not just important to have the speed, but also the
accuracy of response. And so in many occasions, it's,
you know, if you actually learn to use these tools properly, you
realize you need to ask the right framing of the questions.

(19:38):
And in some cases, you actually need to go use the right models
and and and and when I'm using some of these tools, I'll
actually enforce deep, deep search, right?
And so that I get a better, more accurate response versus getting
the faster response. So I think one another
interesting part of that report was that I think 94% of the
people - let me just make sure I get the numbers correctly - say that AI
use in business analytics is set to grow.

(20:00):
But what are the more promising applications do you see gaining
traction from your perspective? I mean, I think if you look at
enterprise applications, everyone's kind of shifting
towards AI enabled applications,right?
So most of the enterprise guys are kind of or I would say the,

(20:21):
the larger enterprise applications or technology
providers are AI enabling their stack, right.
So they're like, hey, how can weleverage AI to serve our
existing customers better? Can we improve interactions?
So they may introduce AI chatbot services, they may introduce
better AI through support. So with logging tickets or

(20:41):
engaging in in that if I'm a customer service agent, can I
get that experience better through a automated telephone
or, you know, AI agent that's happening in the enterprise.
So they're all kind of modernizing, I would say kind of
improving the quality of their interactions and improving the
quality of their applications with their existing install

(21:02):
base. Then you have this whole new
category of disruptors, right? So you have interesting
companies who are kind of building these voice to text.
You have interesting companies like, I mean, who are building
like kind of like all types of, I would say interesting use
cases around Health Science and and so forth.
Where, you know, if I'm a, if I'm a outpatient of a hospital,

(21:26):
you know, how can I make that more efficient versus, you know,
a nurse calling me and checking up on my, on my, you know, drugs
usage and making sure I've takenmy medications.
How can I automate that process?And even things like counseling,
believe it or not, where I want to be able to reach out to older
care patients who mainly have certain requirements
emotionally, mentally, which they need additional support on.

(21:49):
And how can I make that an AI-led approach?
And you'll be surprised so much training has gone into, you
know, obviously getting these models trained on certain drug
types and medication and, and not just that, you know, even
things such as understanding suicidal behaviour.
And so they look at all these things as being able to
hopefully make it a better opportunity for them to serve,

(22:13):
in this case, these patients. But to do it through a
completely AI model or AI-led model has been quite radical.
And those companies have gone from being very, very small to
significantly large businesses. And so I think that's the
interesting space, like you've got a whole new category that
tends to happen in tech every 10-15 years.
We see like the next innovation.And I think AI definitely for my

(22:35):
lifetime will become the most important generational change
for us. Yeah, I think we went through
probably 3-4 technological revolutions over the last three
decades, if I recall correctly, like since we talked about the software
engineering, then web mobile, now AI, it's like it's always
moving so quickly. So what's the one thing you know

(22:57):
about Confluent and the role of data streaming in AI that very
few do? Yeah, I think obviously the
Kafka part is, is a, is a given, I think Confluent is a
category leader in that space. We've defined it and we have
probably the most, I would say impressive cloud offering on the
planet and being able to scale it, However, to make that a

(23:18):
seamless process across being able to consume that as a
service, as a true hybrid offering is something I think
our, our, our larger audience may not be aware of.
And, and being able to integratethat across DSP where you have
the challenge of scaling Kafka, making Flink a very easy to use
consumable service. And if you have an experienced
Flink, I'm sure you have in, in your experience with AWS, but

(23:40):
Flink is a very complex engineering deployment.
I mean, just the operational overhead of running Flink is,
is, is challenging. And to consume that as a service
seamlessly across all of our cloud offerings across GCP,
Azure and, and AWS is an engineering marvel, right?
It's actually a lot of work. And, and I think that

(24:01):
appreciation in some cases like,you know, because we just
consume services, we don't realise the complexity, but the,
the complexity of how we're running these systems at scale
and just our customers, you know, we serve our, you know,
5000 customers globally. It's, it's quite, quite amazing.
So that whole seamless experience, being able to
consume it through a cloud and then go back to on Prem and

(24:24):
hybrid to address sovereign requirements, I think is, is
pretty, pretty unique. So.
Can you walk me through say a real world use case in Asia Pacific
or the Middle East where you actually helped the business to
unlock that value through Confluent?
Yeah, yeah, I, I, I would say the largest use case that we've

(24:44):
seen in probably in a long time is around the payments.
If you think about the world of payments and how payments have
been disruptive - in Singapore, we have, you know, QR based payment
codes. So we call it PayLah, PayNow,
depending on, you know, which bank you're with, but PayNow
being the standard in the world of payments.
And you think of some of the large emerging economies, in

(25:04):
particular countries like India or Brazil, they have pretty
unique standards and they have avery, very large population to
serve. And so in India, we've been very
closely working around some of these areas, in particular with
the banks around payments and so.
The UPI interface. Yeah, through UPI, so NPCI,
National Payments Corporation of India, leverages a lot of Kafka,

(25:26):
a lot of Kafka. And if you look at the backbone
of their stack, I mean, they have a Digital India stack, of
course, which is largely open source.
But if you look, if you peel theonion on that and you look
through what's powering in the payment stack, it's a lot of
Kafka. And so, you know, being able to
kind of like disrupt, but also serve a new generative way of

(25:46):
accepting payments for a guy who's serving tea for, you know,
a couple of rupees and for someone who's doing large
transactions. And to be able to settle those
transactions across intermediarybanks in real time at a scale of
1.3 billion people, I think is something that's been really,
really amazing for that, you know, team to achieve.

(26:08):
And so we played our small part,of course, being the, you know,
one of the largest contributors to Kafka.
And we work closely with the therespective agencies in making
sure we build secure, highly scalable governance around that,
right? Because it becomes mission
critical infrastructure when you're serving the entire
country's payments. Payments, you're also now down -
if you're down, you know, the economy doesn't function, right?

(26:30):
So, you know, consumers aren't able to.
Just to ask this, right? So because you, you want your
data streaming platform now to be essential for the AI
readiness, right? And you can simplify say things
like data access, assuring data quality and maybe even enabling
governance. Can you elaborate a little more
on how these capabilities can actually translate to faster AI

(26:52):
deployment? Because I think enterprise
usually they have a lot of compliance conversations, they
have a lot more governance conversations.
How? How do you all actually navigate
that? Yeah.
So I think the areas you talked about when it comes to, you
know, we have obviously a schema registry, we have a pretty, I
would say stringent focus on role based access.
We we want the the data that's streaming to be relevant and

(27:15):
have strict security and governance in the context of the
applications or the access that you have for the data.
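For the schema registry and governance point, a minimal sketch with confluent-kafka's Schema Registry client: every event is validated against a registered Avro schema before it is produced, so downstream consumers and AI pipelines only see well-formed, governed data. The URLs, topic and schema are placeholder assumptions.

```python
# Sketch: produce events that must conform to a registered Avro schema.
# pip install "confluent-kafka[avro]"
from confluent_kafka import Producer
from confluent_kafka.schema_registry import SchemaRegistryClient
from confluent_kafka.schema_registry.avro import AvroSerializer
from confluent_kafka.serialization import SerializationContext, MessageField

SCHEMA = """
{
  "type": "record",
  "name": "PatientCheckin",
  "fields": [
    {"name": "patient_id", "type": "string"},
    {"name": "medication_taken", "type": "boolean"},
    {"name": "notes", "type": "string"}
  ]
}
"""

# Placeholder endpoints - in Confluent Cloud these come from your cluster settings.
registry = SchemaRegistryClient({"url": "http://localhost:8081"})
serializer = AvroSerializer(registry, SCHEMA)
producer = Producer({"bootstrap.servers": "localhost:9092"})

event = {"patient_id": "p-001", "medication_taken": True, "notes": "follow-up booked"}

# Serialization fails fast if the event does not match the registered schema.
producer.produce(
    "patient-checkins",
    value=serializer(event, SerializationContext("patient-checkins", MessageField.VALUE)),
)
producer.flush()
```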
So if you think of the world of AI, it becomes not only
important for you to have that data in real time and to be able
to do it in a secure manner, butthen you want to be able to in,
in some cases, in a lot of the AI data has sensitivity around,

(27:36):
you know, maybe when I gave you the example of the of the
patient outpatient calls, you have context of patient data,
right? And you know, there may be
certain conversations. Because they're HIPAA compliant
and they need to be HIPAA compliant, and they need - and
obviously there may be certain other areas treating those
patients. So just being able to serve that
data securely and in the context, I think the most

(27:59):
challenging part is in the context where it understands,
hey, when when I spoke to patient Joe, Joe had the
following and the follow-ups with this.
And now in the current context of the conversation and the
relevant drug medications that he's on or the challenges he's
now in real time articulating, you know, and he may be saying,
look, I need additional help on the following areas or I'm, I'd

(28:21):
like to speak to a doctor. Being able to serve that in real
time and to be able to securely do it in, I would say, the
context of localized sovereignty, and then also in the context of
scalable systems that could be on public infrastructure, I
think is, is almost like the magic.
And also the added complexity of the AI responding correctly in

(28:44):
order to pull the correct information across different
sources and then and then streaming back in front of the
person trying to receive the information.
Yeah. And if you remember, I initially
said, in the context of where Confluent can play a critical
part, it's also the interoperable integration with
all the, with all the respective AI ecosystems.
Now, if you think about what AI is doing, it's working across

(29:07):
microservices, working across, you know, lake
houses, databases - it could be a different endpoint service.
To bring all that together and to do it in a secure manner
and consistent manner, constantly serving that
information is where I think, you know it plays the most
pivotal role. So what would be your advice say
to business owners of enterprises now thinking across

(29:29):
things like data streaming, thinking about things or how to
enable their AI applications to run across this kind of
production workloads? What do you think would be the
do's and don'ts that you were asked tell them to do?
Yeah, look, I, I think every application, every use case has
very different requirements. And so being very clear and

(29:49):
articulating, you know, what arethey trying to actually solve
and has that been done before, right.
And in, in many cases, if you look at the AI capability and
what people are trying to do, I think in every context, in the
very good learning opportunities, if you go look at
what's happening in the valley and, and, and context of even
the Middle East, which I think is a little bit further ahead in some

(30:10):
of the use case adoption. But the learning opportunity is
the same. Like the, the challenge of
sovereignty doesn't change between Singapore and the UAE.
It's still the same challenge. The context of, you know, we
talked about that, that patient example where you know, you'd
have an outbound, outbound patient service, which was AI-led -

(30:30):
those requirements would not change.
And, I mean, what would change potentially is how
the data sensitivity and PDPA is managed versus HIPAA and so
forth in the US. And so just the regulatory
compliance may change, but fundamentally the use cases will
be very similar. So I would say the do's and
don'ts is kind of really understand the use case, define
it really well and it's OK to have iterative process on that.

(30:52):
You're never going to nail the use case upfront.
You're going to have you're going to have the ability to
kind of redefine it because whatmaybe a very challenging use
case today, but on in two monthsor three months may no longer be
the case, right. And that's just the nature of
what we're dealing with AI everythree to six months, there's
been innovation that's not existed.
So if you look at even the models, the models themselves

(31:15):
have dramatically improved. I think NVIDIA talked about some
of their I I think I forgot whatthe name of the new chip is, but
not Blackwell. Not Blackwell, but they're
actually smaller and. Bigger ones coming out.
Correct. And it's like this is what
powered basically ChatGPT 3, I think, you know, and it's
kind of like amazing to see the the innovation is not just

(31:37):
consistently happening on the chipset side, but the fact that
all of this translates to how the models have changed and how
the models will perform is pretty remarkable.
So I would say sometimes we spend a ton of time around kind
of defining the use case, making it perfect, and you want
to learn the context of what's happened, how it's being
relevant in the US and other places where this may have

(31:59):
already been done. But then being able to just
iteratively drive it and to do it better, I think is a process
of just constantly accepting that things are going to change
and it's not going to be, you know, it's not going to be this
way forever, right? And, and I think that's, that's
something that people have to adjust to because we almost
think, hey, the problem I'm solving now is going to be the
same 12 months, 18 months on thetrack.

(32:19):
It may not be the case, right? It may actually be the case.
The hardest problem you're solving now may actually be
easily solved in six months' time.
But but the fundamentals don't change, right?
Fundamentals don't change. One interesting thing I find when
talking to CEOs, when they ask me about AI applications
or they tell me that they want to move fast and try to get

(32:40):
the whole team to move with them.
And the first thing I always ask, can you tell me where your
data is? Can you tell me how the
different data streams work? And then they, they suddenly
stop and it's like, oh, and then you start to probe
deeper and then they realise that they need to make
investments in those areas. Do you see that happening all
the time when you talk to customers where they just tell

(33:02):
you, hey, you know, I would liketo do this fast.
But then when you start to look at the underlying, you need to
really help them, to educate them on why this is important.
Yeah, I would say fundamentally anything around AI, the data
infrastructure is probably the most important piece, right?
And so I mean, if there's a takeaway for the audience today, it
should be like without the data infrastructure piece being, you

(33:24):
know, clearly defined and havingstandards around that, you
really struggle to build a scalable offering.
And so you're spot on. Like, you know, most, most of
the challenges we see are they want to move really fast.
But as we've talked about, you know, there's been technology,
there's tech debt, there's skills gaps and there's just
lack of consistency in some cases.
Which is what your report highlighted very clearly on

(33:45):
that. And, and I think we're lucky as
a, as an organization, as a company, because we're kind of
redefining some of those areas. And, and it's a journey.
It's a journey that we somehow we, we kind of talk about our
customers going through a five-phase process where, you know, today
they may just be a very small deployment and it then it
becomes a central nervous system, right?

(34:05):
So where, where the data streaming platform is core and
central to all elements of the business.
And that could be a context of AI, it could be a context of
just servicing existing applications or modernizing
applications, right? And so that entire journey, it
takes a period of time. It doesn't happen overnight.
So, so that's been, I would say our learnings that we, you know,
our customers themselves, you just start departmental and or

(34:29):
you know, siloed little individual developer projects.
That scales from department to, you know, part of
the company to enterprise-wide. And so that's, that's how we also
we're. Going to redesign when it comes
to say the modernization part ofthe enterprise application.
Let's say they want to change this particular enterprise
application, say a chat bot may be running on all legacy
infrastructure and then decided that hey, now I want to transfer

(34:52):
to data streaming because of the way conversational AI
is moving because you were trying to have this high speed
real time communication with the agent and with the client.
So how much of that re-architecting is now required?
It's, it's, it's actually a really good question.
So what we've done is we've built within Confluent Cloud,

(35:14):
which is our offering. We've built predefined
connectors or managed connectors, right.
So we have like I think 80 odd managed connectors
out-of-the-box. So we make it seamless for you
to interact and interoperate with some of these systems out
of, out-of-the-box. Like Genesys for a connection.
Exactly right. So the CRMs of the world, the
database platforms of choice andand so forth, you know, and then

(35:37):
we've also gone and there's a whole ecosystem around this.
And so those are fully managed, the Confluent-managed ones, but there are,
there are connectors that are built part of the community
which may be self managed. And so there's probably like
150-180 connectors. So you also have like say a
marketplace or some kind of third party providers that can
actually help to actually customize Confluent to say maybe

(35:59):
certain specific use cases. So what we, what we find is
there's a whole bunch of ecosystem community based
connectors out there as well. So Confluent has its own and we,
we, we obviously want to make sure our connectors are
scalable, enterprise grade and so forth.
And, but there are a ton of community connectors which have
all sorts of use cases. And, and you know, someone's

(36:20):
found a particular connector that they want to build and
interoperate. So when you talk about kind of
like these legacy systems, it's about unlocking it.
So you, we may not have a nativeKafka client, but someone's
built a custom connector that talks to that protocol and
serves it as Kafka. So there are multiple ways
around it. And so I think that's been,
that's been a really interestingway for our organization, our

(36:41):
customers to benefit. It's like they don't have to
necessarily rip and replace, but they're interoperating through
a protocol which is now built by someone who's built in the
community.
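To show what configuring a connector can look like in practice, here is a hedged sketch that registers a JDBC source connector through the Kafka Connect REST API, so an existing relational database starts streaming into Kafka without a rip and replace. The database URL, credentials and host names are placeholder assumptions.

```python
# Sketch: register a JDBC source connector via the Kafka Connect REST API.
# pip install requests
import requests

connector = {
    "name": "orders-db-source",                        # placeholder connector name
    "config": {
        "connector.class": "io.confluent.connect.jdbc.JdbcSourceConnector",
        "connection.url": "jdbc:postgresql://db.internal:5432/orders",   # placeholder DB
        "connection.user": "connect",
        "connection.password": "secret",
        "mode": "incrementing",
        "incrementing.column.name": "order_id",
        "table.whitelist": "orders",
        "topic.prefix": "legacy.",                     # rows land in topic "legacy.orders"
        "tasks.max": "1",
    },
}

# The Connect worker exposes a REST endpoint (placeholder host and port).
resp = requests.post("http://connect.internal:8083/connectors", json=connector, timeout=30)
resp.raise_for_status()
print(resp.json())   # the running connector's name, config and task assignments
```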
Yeah, because what I think is one of the untold stories of AI applications or generative
AI per se - it's actually the modernization.
And you can think about, say, like COBOL programs, where

(37:02):
now you can use a lot of generative AI to move them
from COBOL to Java. And then, you know, that also
comes together with the financial data streams that we
are talking about in new banking, transaction banking.
So that's why the whole concept of thinking about modernization
where the data streaming is involved is actually not an easy
task. That's why I thought that

(37:22):
question would be good. Was, was actually very
interesting when you told me, oh, actually there are actually
connectors to to get there. But I have this question for
you, right? What's the one question that you
wish more people would ask you about Confluent or DSPs?
I think the most obvious is around interoperability, right?
So I'm going to ask you the question.
What about this interoperability?
So how do we better work across like if you think one of the

(37:45):
biggest challenges is I have allthese disparate systems, I have
all these fragmented data. How do I interlock?
How do I unlock this data and how do I make it easy and
seamless for me to share that data?
And so I think the, there's a, there's probably a lack of
awareness around some of the amazing things we're doing
around how we provide that interoperability through Kafka

(38:06):
as a standard protocol. And we have our entire connect
ecosystem, which is, you know, we actually have a program
called Connect with Confluent, CWC, where you can come and even
build custom connectors as I mentioned earlier, or you can
just leverage our connectors, pre built connectors, right.
But to do that across the enterprise domain or to do that
across your entire infrastructure is pretty, pretty

(38:27):
impressive. And so many cases customers
don't realize the power of our connectors.
They understand Kafka because they've already built something
ground up Kafka, but you may have other stuff which doesn't
work as well, and that's where we could really unlock and
harness the power of data. Wow, that I didn't realize.
So what does great look like for Confluent and within your region

(38:49):
of interest? Say, what does success really
mean for the business in the next 5 years for you?
Yeah. I mean, we, we've been probably
experiencing some of the most, Iwould say, aggressive growth in
the region. If you look at our business and
I've been with the company for just over 3 and a bit years.
And since I've seen the businessin Asia Pacific grow, our

(39:10):
Southeast Asia business continues to hum along.
And our Indian business has beenprobably one of my strongest
performing businesses and Australia has been obviously a
foundational part of our our growth journey.
So if I was to look in the next 5 years, I think our business
will become even larger. The interesting part is the
adoption of, of some of these technologies and just the scale

(39:33):
of these technologies that what we're serving in Indonesia and,
and India and and in China for that matter, a very, very large
economies which have tremendous appetite for some of these data
systems. So we talked about NPCI, we
talked about payments - that problem is in billions, right?
And so when you think it's. Going to be in trillions.
It's going to be, well, they're going to get to 100 billion

(39:54):
transactions a day. How many systems in the world do
that, right? And so when you think about the
context of what we're solving inthe scale of the systems, I
think there's just so much to bedone Over the next five years, I
think you'll see more and more emergence of that in this
region, more than any other region.
Like Europe's heavily regulated but still doesn't have the same
level of scale and sophistication in some cases on

(40:15):
the tech. And I would argue that some of
the some of the innovation that we're seeing in Asia is actually
a little bit further ahead of now of Europe, which, and that's
not because Europe's not capable, it's just a heavily
regulated industries make it a little bit more challenging for
them to move. But then we also have a very
unique problem, you know, when we're solving for billions of

(40:35):
people, you know, for us, you know, the fact that we can move
a little bit quicker, it makes this a very unique market.
So five years down the track, I think Confluent's Asia Pacific
business will probably be, you know, I'm saying, one of the
largest components of growth for the company.
Do you think significantly it's going to be
powered mainly by generative AI applications or AI applications

(40:57):
from the enterprise? I have no doubt, I think over
the next 5-5 years we'll see a massive transformation across
how we interact with applications.
You know, we today are very much mobile-led and I think that
mobile experience will become more AI-generative led.
Or maybe we can go to glasses. Yeah.
Or maybe even you wave your handand it's.

(41:18):
Already happening, right, where today you have the glasses.
Maybe they'll be just to controlmore interaction there or if we
can get these neural chips working correctly.
Yeah, I think that's gonna be really exciting era.
And I don't think it's too far like we're talking and the
context of robotaxis coming out by 27.
Yeah, in Dubai where I spend time as well, they're going to

(41:41):
start that next year, I think. So it's actually not too far
away. And actually almost everything
that we just talked about in the last 30 seconds to a
minute - data streaming will enter into the lens so they can see
the real time information. Yeah.
And then the self driving cars need to be able to access all
that - it's actually data streaming that's very much involved.

(42:02):
It's what I refer to as AI on the edge.
You have to be able to serve thedata on the edge.
And that's again where Kafka is so important, right where your
decisions are made on the edge. You can't wait for something
good to go back to the cloud. If I'm driving my Tesla and I
need to be able to process all that information within my own
processor on the car, I'm not going to extend the query back

(42:24):
to the to, to the cloud and say,hey, can you make a decision
while I'm driving? That would be a bad, bad
outcome. So AI on the edge is is a very
important part and that's where a lot of these embedded systems
actually leverage, would leverage, Kafka as a protocol.
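A rough sketch of the "AI on the edge" idea: the decision is made locally from the in-vehicle event stream, and only a summary is published upstream for later training, rather than a round trip to the cloud on the critical path. The topic names, broker addresses and decide() logic are illustrative assumptions.

```python
# Sketch: decide locally from the in-vehicle stream, publish a summary upstream asynchronously.
import json
from confluent_kafka import Consumer, Producer

local = Consumer({
    "bootstrap.servers": "localhost:9092",    # broker running on the vehicle (assumption)
    "group.id": "edge-decider",
    "auto.offset.reset": "latest",
})
local.subscribe(["sensor-frames"])            # placeholder topic of fused sensor events

uplink = Producer({"bootstrap.servers": "cloud-broker:9092"})   # placeholder cloud cluster

def decide(frame):
    """Placeholder for the on-device model - must answer in milliseconds."""
    return "brake" if frame.get("obstacle_distance_m", 999) < 5 else "continue"

while True:
    msg = local.poll(timeout=0.05)
    if msg is None or msg.error():
        continue
    frame = json.loads(msg.value())
    action = decide(frame)                    # the decision is made on the edge, no round trip
    # Fire-and-forget summary for offline training; it never blocks the driving loop.
    uplink.produce("drive-decisions", value=json.dumps({"action": action, "ts": frame.get("ts")}))
    uplink.poll(0)                            # serve delivery callbacks without blocking
```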
Kamal, many thanks for coming on the show and I really appreciate
this conversation. I think you actually bring a

(42:45):
very interesting dimension. I think a lot of my audience may
not know the importance of why data streaming platforms,
specifically on the data side, are so important to what's going to
power all the generative AI applications or even you know
all the next few form factors from glasses to self driving
cars. So in closing, I always have two
quick questions. Any recommendations which have

(43:07):
inspired you recently? Well, I used to work for an
amazing guy and I think you should, you know, I spend a lot
of time, he's one of my mentors and you know, someone called
Bipul Sinha. He's the CEO and founder of a company called
Rubrik. They're in the cyber, data,
cybersecurity space. And he's been, you know, he
always inspires me to do, you know, amazing things.

(43:28):
He moved to America when he was straight out of college.
You know, he kind of went to IIT, failed a few times before
he got into IIT in India, but then he ended up being an
American and built this amazing company.
And I think he's the way he defines it is that, you know,
ultimately everyone has a visionor everyone has a dream.
How big you make that is really the limitation you enforce on

(43:51):
yourself. And you know, it's kind of like
a interesting perspective on it,but you know, he, he really, I
think exemplifies the, the creative approach of just having
no limits. And he continues to do amazing
things. So I learned a lot from him and
he has interesting posts. I encourage you guys to go check

(44:12):
those posts out. And if you want a good read,
The Hard Thing About Hard Things is great.
Like, from Andreessen Horowitz, Ben and Marc - Ben has a great book on
their journey of building a company and, and just some of
the hard decisions that you haveto make when you're building
companies. Both.
Both I would say very aspirational and good reads.
And how does my audience find you? And please feel free to tell me
because I'm probably going to put a link to the data streaming

(44:34):
report. But what else do we need to know
about you all? Yeah, of course, I'm on
LinkedIn, so you can find me on LinkedIn.
That's, that's pretty easy. But also check out Confluent's
website, confluent.io - you'll learn a lot more about
data streaming. We in fact just launched the
Data Monster as a as a theme. So it's a it's a fun way for us
to describe the challenges of data fragmentation and so on and

(44:56):
so forth. And you then definitely find us
on any channel from Spotify to YouTube and of course LinkedIn
as well and come up Many thanks for coming on the show and thank
you for sharing. My pleasure having me, thank you
so much.