
September 27, 2024 • 43 mins

How can you handle PHI securely and avoid a $20k cloud bill in a day when your app goes viral? Push your AI to the edge! Repeat founder Mandip walks us through lessons learned from a variety of successful businesses and how to take advantage of edge AI.


Episode Transcript

Available transcripts are automatically generated. Complete accuracy is not guaranteed.
(00:00):
Welcome to Artificial Insights, the podcast where we learn to build AI people

(00:04):
need and use by interviewing product leaders who have launched AI products.
I'm your host, Daniel Manary.
And today we have Mandip.
And Mandip, I met through an event that he ran at Techpreneurs and other
CommuniTech events here in Kitchener, Waterloo, and he's been a serial tech

(00:25):
entrepreneur with experience in analytics, AI/ML, visual mobility,
and team building.
He currently works on Vite HR and edge AI solutions.
In the past, he's also founded HeyFlyer and I Can InfoTech.
Mandip is passionate about helping other entrepreneurs succeed

(00:46):
through mentorship and networking.
So Mandip, the question everybody's going to want to know is, are you an AI?
AI, that's the new normal word. We can consider it our new phenomenon, but I
consider it a new phase of programming.
So AI is just a new phase of the programming we have been using

(01:10):
for decades and decades, but it's now more powerful with the automation of
data intelligence, and everything is at your fingertips.
So I can say there are two types of AI I always like imagining.
One is elastic AI and the second is imaginary AI.

(01:31):
So people are thinking AI will replace jobs, AI will change things, it will
change the definition of a job, but it will not replace humans at any cost or
at any time, because humans are the ones driving the AI, and we are the ones
creating AI and using AI.
So we need to be very positive about AI.

(01:54):
That's my first advice to everyone: AI is a kind of software, hardware, or
tool that will help us be more productive.
But would you say that you're not an AI?
Yes, I'm not an AI.
I am human.
That's great.
Because I just saw this morning a post by Lattice, an HR software company,

(02:20):
and they say you can now officially hire an AI through their platform and they'll
give you like a record for that employee and other documents that are like legal
documents for a human.
What do you think about that?
Yes.
So that's an interesting space in terms of AI.
Let me give you a little bit of

(02:42):
context from a detailed interaction we had with one of the biggest law firms
in our hometown, Kitchener; one of my friends is a lawyer there.
Hiring a lawyer is an expensive thing.
But for a normal question and answer, sometimes you are not hiring a lawyer.
You are just like, okay, I will try it myself, and if something comes up,
I will go to a lawyer.

(03:03):
So he created a chatbot that will give you the basic Q&A and things.
You don't have to book an appointment with the lawyer, but if at a certain
stage you need to book an appointment, that will come up and then a human
gets involved.
The same thing is happening in the HR space.
HR is doing repeated work like contract creation, those sorts of things.

(03:24):
So in our product we also implemented the same thing: we use AI as an API
service behind the chatbot.
We created all those templates in our document creation and things,
and that's making HR's life much easier.
And on top of that, we created a security layer, or what we can consider

(03:47):
an advisory layer for the AI.
So whenever anybody is about to share confidential information with the GPT,
we make sure they will not send a prompt that contains private information,
like "can you create an offer letter for me, for Daniel, for this position,
for this salary." Instead of that, we are saying:
hey, create the generic one and then embed your information as a template afterwards.

(04:10):
You have an advisory layer, you called it.
And that would be something that gets run in front of the chatbot so that
you don't leak personal information.
Yes, exactly.
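To make the advisory-layer idea concrete, here is a minimal sketch of what such a pre-chatbot check could look like, assuming a simple regex-based screen. The patterns, function names, and example prompt are illustrative assumptions, not the actual Vite HR implementation; a real advisory layer would use the organization's own PII rules or an NER model.

```python
import re

# Hypothetical patterns for the kinds of personal details mentioned above
# (emails, salaries, phone numbers).
PII_PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "salary": re.compile(r"\$\s?\d[\d,]*(\.\d+)?\b"),
    "phone": re.compile(r"\b\d{3}[-.\s]?\d{3}[-.\s]?\d{4}\b"),
}

def advise(prompt: str) -> tuple[str, list[str]]:
    """Return a redacted prompt plus advisory warnings, to run before
    anything is forwarded to an external chatbot API."""
    warnings = []
    redacted = prompt
    for label, pattern in PII_PATTERNS.items():
        if pattern.search(redacted):
            warnings.append(f"Prompt contains what looks like a {label}; "
                            f"replace it with a placeholder.")
            redacted = pattern.sub(f"<{label.upper()}>", redacted)
    return redacted, warnings

if __name__ == "__main__":
    prompt = "Create an offer letter for daniel@example.com at $85,000."
    safe_prompt, notes = advise(prompt)
    print(safe_prompt)   # Create an offer letter for <EMAIL> at <SALARY>.
    for note in notes:
        print("ADVISORY:", note)
```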
Okay.
Now there's another challenge our industry is facing.
We need to educate and upskill our current talent, whether in

(04:32):
IT or non-IT roles, or any person, to use AI, and Communitech has started
a good initiative on that, the "good AI" thing.
So every company needs to take care of AI, as a policy and as basic
training, whether in onboarding or as a regular process,
so people will be very responsible in using AI.

(04:54):
So if you are answering an email with AI and you are pasting it in, like, hey,
I received this email, can you respond to it?
That means you are leaking very confidential information from
your company onto a public domain.
So you need to create that advisory layer, in the sense of providing the
basic guidance on where to use AI and how to use AI.

(05:16):
What would you say is the most important thing to know when you're starting to use AI?
So if you are using any AI platform, you need to be careful up front:
hey, what's your company's guideline?
What are the tools you are using?
So there are two use cases.

(05:36):
I always explain this to people who are using it, and specifically non-tech people.
There's AI you use to get your work done,
and there's the personal side.
So if you're doing any research for yourself, like for your upskilling
or anything, you can use any platform you prefer.

(05:58):
But whenever you are doing your work-related stuff, it's different.
Let's say, I met one of my friends who is working for a pharmaceutical company.
They used to go deep-dive into white papers and all those sorts of things
for research on a specific subject.
And their asset is the drug they are building, and all that sort

(06:21):
of information; they have very confidential information on the research side.
Intellectual property.
Yeah.
The intellectual property side.
And as well, there is certain personal information in the personnel record
systems and those sorts of things.
So when you are putting that confidential information into a public domain, people
are thinking, hey, I'm just interacting with one generative AI tool.

(06:45):
Like, I am the only one using it, with my own account.
That's the wrong perception people have, because that LLM is learning
from your data sets and your inputs.
So you need to be careful.
Is your company compliant with that, or is there a private AI available? In the

(07:05):
sense that if you are using, let's say, Google Workspace, you can use Gemini from that.
Have you gotten permission from your IT team to use Gemini, or are you using your personal
Gemini and leaking your company's confidential information through that personal Gemini?
What would your advice be to executives who want to think about data privacy and still

(07:28):
have their team use generative AI?
The first and foremost thing I will suggest to them is to create a basic framework of
what to do and what not to do, like what to say and what not to say.
If they can create that and communicate those policies to people, like what

(07:49):
information they can share and what they can't share on any platform.
Because we've seen enough news about highly confidential data being shared, and
people got terminated after those things.
So before reaching that stage, first work on creating the basic
guidelines.

(08:09):
And you can even use generative AI to create that.
That's a funny part of AI:
you can use AI to restrict AI.
And have you had to make such a policy for one of the companies that you work for?
Yes.
So there are two types of policies, and people welcome one kind much more than the other.

(08:33):
You need to create a policy that focuses on how you can reduce your
workload with AI.
And in that policy, you can embed things like: okay, you don't have to do this,
and you can do that.
If you do that, you can save your time and your energy, and you can focus on the
important thing.
So it should feel more like productivity training than compliance training.

(08:53):
So with that sort of the approach, you will be able to create the positive vibes of
the AI and the like, your team will not see like you are threatening them for using AI
all things, but it will be more educational based things where you are providing the
very minimalistic approach of like, hey, we are on a Google environment for our

(09:15):
things and we are coming up with a Gemini, next upcoming plan.
Or you can disclose your future AI adoption strategy for that.
So they will be aware like, okay, this will be coming from the company or I'm not
legally allowed to say the confidence in data.
The reason being why?
This is the reason.
So they will be aware like if something is happening, this is the reason behind that.

(09:37):
So I always advocate for that, because employees respect legal
compliance; they are there to perform their duties well.
And once you give them the reason, they will be more than happy to follow
those things for themselves.
I like that angle too, where you said the focus was on helping people to be

(09:58):
productive rather than on holding them back from something that they would
normally have done.
Rather than hindering them, tell them what they can do and explain to them why
they can't share something.
Yeah.
Awesome.
Now I wanted to ask you about the AI powered products that you've launched

(10:24):
based on former conversations that we've had.
It sounds like there have been quite a few:
a number of smaller products as well as larger products.
Which product has been the most commercially successful, and why?
Yeah.
So let me give you different aspects of it so you can get a sense of it.

(10:44):
We have been playing around with AI for, I can say, the last two to three
years.
We failed miserably with a few products, in terms of results, in
terms of cost, in terms of the use case.
From that we learned things, and the most successful use case I want to

(11:06):
discuss is the first one, where we helped with claim processing on the insurance side.
There were long delays and long processing times where humans were
involved in the first layer of processing, which is mainly
extracting the data from PDF files to make certain decisions.

(11:30):
And I will not say it's AI; it's workflow automation, but workflow
automation with the proper data and analytics associated
with it.
So we started with that.
Then we were running all those sorts of
cloud AI services and things.

(11:50):
So instead of implementing a whole new AI for a company, we prefer to use the
biggest companies' AI models and apply them to the use case, trained on the
client's data set.
So we did that.
And then the core challenge we were facing was the cost, because the
market is ready to adopt AI, but in another way the market is not ready to

(12:11):
spend more money on the AI thing.
So the cost was a central point.
And the processing cost per claim was super expensive, the reason being that
we were running three AI-as-a-service models from big cloud companies.
And that was creating a good amount of cost, like nine to ten dollars, that sort of

(12:34):
cost per claim.
And still, it was reducing eight to ten man-hours of processing.
And then we got some good news from the big players: they were working on the
on-device AI side and they launched their ML libraries for that.
And we implemented that edge AI sort of data processing over the edge, and we

(12:54):
figured it out and created filtered JSON data formats on the edge AI side,
replacing what had been done on the cloud side.
So we fixed the efficiency issue on the AI side:
we improved the efficiency, we reduced the
cost, we reduced the waiting time, and those sorts of things.

(13:16):
And we created basic workflow automations with the AI-powered data.
Instead of a human reading out all the PDFs, we created conclusion-based
outputs, processed about 20 years of data sets, and made the
decisions.
And then we informed the users that this decision and this result has been

(13:39):
achieved or created by the AI algorithm.
If you are happy, click next and we will move forward to the next step, where a
human is involved.
If you are not okay with the result and the data parameters, click here and a
human will review it as in the traditional process. And out of 20 files, like 18

(14:00):
files were agreed to by the users.
So that's the two-way communication we need to create, by telling them:
this is not a human.
Whomever I'm speaking to, they are like, hey, in customer service, whenever
you put in AI, it's like chat mode rather than real AI.
That's a big headache, and people say, I don't want to speak with the AI. And I

(14:21):
think you need to communicate to the end
customer, as well as business users, that this is the AI or the machine
that's making the decision, but you still need the human; there's still a human
angle to it.
Wow.
You said a lot of things in there that I think we can dive into.

(14:42):
And the first thing was you mentioned that it wasn't really AI.
Why would you say that?
So there's a perception of AI where whenever anybody is thinking about it,
it's like, hey, I want to create my own LLM.
I want to create my own AI.
So consider you and me, like 20 years back, when email was

(15:04):
getting started. We are using email services from
providers like Gmail, Yahoo, and Hotmail, or our hometown-based BlackBerry,
and those sorts of things.
But if you are saying that you are creating your own mailbox service,
that's a different thing.
So you're using the AI from the bigger providers, or from very well trained

(15:28):
models.
That's the case where you're using AI.
But when you are saying, I'm creating AI, that means: are you creating
the LLM?
Are you creating your own AI completely?
That's a different story itself.
And that requires tons of effort.
That requires tons of money too; it's not like, you know, for a thousand bucks
you can get your own AI.

(15:49):
And it's a time-consuming thing. People think OpenAI
launched and overnight it became successful.
No, there's a ton of engineering power and a ton of data sets
required for that AI.
Yes.
And I think OpenAI, too, says you can hire their team starting at a million
dollars to help build AI on your own data.

(16:12):
So it comes down to in this case, you would say it was successful because you
didn't have to spend the money and the time to build your own from scratch.
Exactly.
And that's where there's a thin line: am I building AI, or am I using
AI to build bigger and better solutions?
You need to focus on the purpose and the result you want to achieve. Besides,

(16:37):
there are hundreds of open-source AI models available,
and hundreds of libraries available that can do so much already.
So instead of saying we are building AI, we are creating AI,
say we are using AI to build solutions that are
driven by AI. That makes more sense to my way of thinking.

(17:00):
So to use AI to build a solution.
Yes, exactly.
And AI will make the decisions, but it's pre-built AI that we are using; we are
not reinventing the wheel, like, let's train the model from the first
line to the second line to the third line and build up the learning model

(17:21):
from one to a hundred, when somebody has already created a model that
has on the order of 10,000 or a million parameters.
Using those million parameters and making decisions with them will give you
much more fine-tuned results compared to
starting your own model from scratch.
And one other thing you said that I thought was quite interesting was you

(17:44):
said it cost eight to ten dollars per claim analysis, I think, at first.
Yeah.
And even though that saved eight hours of human time, was that something
that your stakeholders were interested in, or was that still too expensive?
So there were two sides to it. Their mindset was that they

(18:07):
implemented AI and they were like, oh, I'm going to do this.
They were happy to achieve the results.
But then they looked at the bigger metrics: they had rolled it out for
like 10 to 15 policies, and when they considered it at a hundred
thousand, it was coming to something like a million dollars of yearly cost, and they still

(18:29):
needed the humans.
They cannot fire the humans, because there's that angle of customers saying,
I need human interaction.
So we were reducing the human hours and effort, but you cannot
say, okay, there are 10 people with a hundred-thousand-dollar salary each, and
we are replacing them with this million-dollar system.
That was not happening.
It's not happening in any industry, technically.

(18:49):
And there's another thing.
When we are replacing human effort, we are providing people productive
tools, but we need to make sure the 10 hours that have been saved can be
utilized on other productive work too.
If people are sitting idle for that time, that was a big concern on our
side; you will not be able to justify the savings on that side

(19:11):
of the eight to ten hours. And then, how many users will say,
I need the human interaction?
If everybody's saying the same thing, there's no point in implementing
a technology that will cost them a huge fortune
and still require humans.
So that was the reason we focused on how we can reduce the cost in the
first phase.

(19:33):
So it can be rolled out in a smooth manner and it can be scalable, and
management can accept it: the humans are there and AI is
an add-on to that.
They weren't looking to cut headcount.
They were just looking to make the workforce more efficient.
Yeah, exactly.

(19:54):
And that's the good message we received.
And it's the same message we want to pass along to everybody
in a non-tech job.
AI is not replacing you.
AI is providing you a tool that can save you all that time so you can
enjoy many more sunny days with your family.

(20:15):
And hopefully do less of the work that means nothing.
You don't have to read a hundred documents every week just to say yes.
Yeah, exactly.
I also really liked that story where you had a new AI capability come out.
What was it that saved you all of that processing and bandwidth?

(20:36):
So you did it on the device instead of in the cloud.
What was that AI innovation?
Yeah, so it's AI over the edge.
Currently, whenever we are considering AI implementation,
we are mostly considering the cloud.
Like, we will implement this, we will use these AI services, or we will use these

(20:56):
cloud AI services.
And that's where we saw a stage failure of the AI initiatives
we had started, because of what it was costing them. There was one learning model
we started.
We were doing well.
The model was learning very well.
And the minute we launched, we received $20,000 from the cloud providers in a

(21:18):
one-day bill.
The reason was that it started learning from all the metrics.
It was in the travel sector, and it was learning each and every city, tourism
sector, and those sorts of data. The data parameters we had tested were for one
particular city, one particular behavior, and we replicated that for 10 behaviors.

(21:42):
But when we launched the actual AI for the end users, it started digging down
into each and every city, each and every town, each and every behavior.
And then massive numbers of API calls were happening.
So you ended up having a big bill because you'd set it up geographically and

(22:03):
your tests were smaller.
And then in production, it was now suddenly maybe not the whole world, but
many cities.
Yeah.
And they immediately terminated the whole initiative, because they received
like 20 grand in one day, and this is not the tool they wanted to use or
implement in their company.
In the same exchange, more of the cost was coming when we

(22:29):
were processing image data, video data, audio data, or data that
requires processing on the media side.
That was the massive requirement, because text processing is not that
costly compared to image processing and all those other things.

(22:49):
So we worked around that with certain classifications.
There's a good amount of libraries available that can be run over
the edge.
Nowadays, if you and I consider what's in our pocket, we have
devices with something like a terabyte of space with us.
MicroSD cards are more capable than the hard disks we have in our

(23:12):
computers.
Another big worry on the enterprise side is data privacy,
as well as data being processed outside.
That's what triggered us: hey, something can be done in that space.
And another one was the environmental damage.
In the name of AI, we are burning thousands of compute hours to

(23:33):
process data, which can be easily eliminated by edge AI.
We combined libraries like TensorFlow Lite and Core ML by Apple.
There are different libraries that can produce the same
results on the device as can be done in the cloud.
So we combined that and we created certain use-case-based models, which you can

(23:54):
consider as a framework.
It can convert your image or audio data on the device itself, and you will
receive the JSON format, so you don't have to rely on that sort of
processing in the cloud.
Remote-site monitoring systems need video analysis, and sports training
needs that video analysis in live mode.

(24:14):
If you want to go with those live video products, live video analysis,
there's a crazy amount of cost involved.
But if you are running your smaller models or smaller algorithms
with a predefined process over the data at the edge, and then feeding the bigger
LLM with preset data sets, the cloud LLM will get

(24:39):
crisp data and give you proper reasoning, that sort of thing.
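A minimal sketch of the pattern described here, assuming a hypothetical TensorFlow Lite model: a small model runs on the device and only a compact JSON summary, not the raw frame, is forwarded to the cloud. The model path, labels, and threshold are made up for illustration and are not the actual models from the episode.

```python
import json
import numpy as np
import tensorflow as tf  # tf.lite ships the TFLite interpreter

# Hypothetical model and label set for a small on-device detector.
MODEL_PATH = "detector.tflite"
LABELS = ["person", "vehicle", "package"]

interpreter = tf.lite.Interpreter(model_path=MODEL_PATH)
interpreter.allocate_tensors()
input_info = interpreter.get_input_details()[0]
output_info = interpreter.get_output_details()[0]

def summarize_frame(frame: np.ndarray) -> str:
    """Run the small on-device model and return a compact JSON summary.
    Only this JSON (not the raw image) would be sent to the cloud LLM."""
    batch = np.expand_dims(frame.astype(np.float32) / 255.0, axis=0)
    interpreter.set_tensor(input_info["index"], batch)
    interpreter.invoke()
    scores = interpreter.get_tensor(output_info["index"])[0]
    detections = [
        {"label": LABELS[i], "score": round(float(s), 3)}
        for i, s in enumerate(scores) if s > 0.5
    ]
    return json.dumps({"detections": detections})

# Example: a dummy frame shaped to the model's expected input.
height, width = input_info["shape"][1], input_info["shape"][2]
frame = np.zeros((height, width, 3), dtype=np.uint8)
print(summarize_frame(frame))
```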
So that's where we ended up, and I'm always a big fan of open source.
We are putting this out as open source, since it uses open-source libraries;
we don't want to keep it closed source, that's not making sense.
So we are launching it as an open-source community platform and we'll

(25:00):
keep building on top of that.
Very neat.
So it sounds like in more than one case, you've had big savings in terms of both
cost and electricity when you put processing on the device and you either
extract the information that you need on the device, like from the audio,
or you filter it down and only send the parts you need.

(25:23):
Yep.
And this is another good achievement.
We did it for one enterprise on the identity verification side.
Whenever you are uploading any identity document to the cloud, there is a
good amount of legal compliance required on your cloud side.
And there's a cost; it will cost you a good amount of money.

(25:44):
So if you process the OCR with a library that you can implement on
device, you can verify it there and then only process the end result
instead of the personal identity document. That is a cost saving and a
compliance saving, and it will give you very precise results on
(26:05):
that part.
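As an illustration of on-device identity extraction, here is a rough sketch that OCRs a document locally and keeps only the fields needed for verification. The episode does not name the OCR library; pytesseract stands in here for whatever on-device engine would actually be used (on mobile, a platform OCR SDK), and the field patterns and file name are assumptions.

```python
import re
import pytesseract          # stand-in for an on-device OCR engine
from PIL import Image

# Hypothetical field patterns for a driver's licence; a real integration
# would follow the issuing jurisdiction's exact layout.
FIELDS = {
    "licence_no": re.compile(r"\b[A-Z]\d{4}-\d{5}-\d{5}\b"),
    "date_of_birth": re.compile(r"\b\d{4}/\d{2}/\d{2}\b"),
}

def extract_fields(image_path: str) -> dict:
    """OCR the document locally and return only the fields needed for
    verification; the image itself never leaves the device."""
    text = pytesseract.image_to_string(Image.open(image_path))
    extracted = {}
    for name, pattern in FIELDS.items():
        match = pattern.search(text)
        if match:
            extracted[name] = match.group(0)
    return extracted

if __name__ == "__main__":
    fields = extract_fields("licence.jpg")   # hypothetical local file
    # Only the verified fields are uploaded; the raw scan is discarded.
    print(fields)
```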
So we achieved that too, and then we realized, okay, data privacy and
compliance is another challenge everybody's facing in this space when
they're implementing apps.
Yeah.
So you even get around having to do something like HIPAA compliance, because
you can process the document on the phone and not even send the sensitive
(26:26):
information.
Yeah.
Let's consider one example: you are onboarding a new employee
or a new customer for your bank and you need to do their client and customer
data processing, and that requires like 10 data fields. You just need to
capture those 10 data fields, which can be done on your
device.
So you don't need to keep those documents; once the documents are verified

(26:51):
and you have the data you required from that particular identity
document, you don't need to keep the identity document on your server side.
And after that, you need to destroy the identity document upon verification
and so on.
And yes, you and I have been using all this device-side
verification and identity processing for decades; they are scanning our

(27:15):
driver's licence and extracting the data.
All those things can be performed at the edge, so why do we have to worry
about the cloud side?
So that sort of thing can be achieved, and on top of that, they can put
the model on-site.
Those models can perform the correlated work;
we can consider it in their own X and Y terms.

(27:37):
That model will go into the DMS, the document management system,
and it will extract the data.
Like, hey, this person already has a policy with us.
This is the number, and that can be extracted and
easily linked: okay, this is the policy,
this is the driving licence,
this is the claim, and everything has been tied together and processed on-device.

(28:01):
They are making sure their claims and everything have been prepared.
And then at the end of the day, it's consolidated as one complete
claim, and that can be processed in the cloud.
So you're saying one other advantage of doing the processing on the edge or
computer or phone or whatever is that you then have really easy data to match

(28:25):
with other data you have.
So if somebody just sends you a picture of a driver's license, it's a picture.
You don't know how to match it.
You need to extract it anyway, but if you get the extracted information, it's
as fast as any other database or just any other information that you have.
Exactly. And there's one more thing which I would like to add to the

(28:46):
conversation. That's why I'm saying all these technologies have been there
for so many years and we were using them, but AI makes it much faster, easier,
and more accessible to interlink everything.
So whenever you are adding more data and more powerful capabilities to the AI,
you can do it faster and more easily.

(29:08):
Adding more data and more powerful capabilities to the AI means you are
achieving faster results.
So on the AI side, we can use all the current frameworks and
libraries and interlink the existing data we have been holding for
decades.
One company, I can say, might be sitting on something like two to three terabytes of data,

(29:33):
usually.
That's the minimum amount of data people have. If it's a
law firm, they have the previous cases.
If it's HR, they have previous employees' data.
One employee has at least, I can consider, something like

(29:53):
40 to 50 data parameters they have to capture, and they can use that
data to be more productive.
One example is a request we got from HR: everybody keeps asking
what's covered in my benefits and what's not.
Even if they have a copy of the benefits booklet with them, every time they

(30:18):
will come back to HR and say, hey, is it covered in my benefits
plan? That can be extracted from that particular PDF and put into
just a normal TensorFlow Q&A model.
And it can be answered from that Q&A; that can be achieved with simple,
I can say, JavaScript code.

(30:40):
You don't need to build a whole AI for that; we are sitting on a big
library of all the employee benefits, and we can use that as a Q&A model
and achieve it.
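Mandip describes this as a TensorFlow Q&A model driven by simple JavaScript; as a comparable illustration only, here is a short Python sketch that extracts the text of a benefits PDF locally and answers questions with a small extractive Q&A model. The file name, model choice, and question are assumptions, not the actual setup.

```python
from pypdf import PdfReader                 # extract the benefits text locally
from transformers import pipeline           # small extractive Q&A model

# Hypothetical file; in the scenario above this would be the employee's
# benefits booklet PDF.
reader = PdfReader("benefits_plan.pdf")
plan_text = "\n".join(page.extract_text() or "" for page in reader.pages)

# A small SQuAD-style extractive model stands in for the TensorFlow Q&A
# model mentioned in the episode.
qa = pipeline("question-answering",
              model="distilbert-base-cased-distilled-squad")

question = "Is massage therapy covered, and up to what amount?"
result = qa(question=question, context=plan_text)
print(result["answer"], f"(confidence {result['score']:.2f})")
```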
That's a great point.
I feel like there's a lot of untapped potential in what we can do with
historical data.
So would you have any advice for an executive who might wonder if they have

(31:06):
untapped data, how would they find it?
Just open up your drive, or Google Drive or any sort of folder storage system,
and just look at what happens whenever you are performing a normal search over
your normal data. You will be able to see, from all those dashboards and

(31:27):
their analytics, which one is loading data from which different
data source.
You can see for yourself that your company has a great amount of data.
And then you just need to figure out which data has been under-utilized or has
not been productive in terms of Q&A, answering your questions.

(31:50):
Answering new things.
Or let me give you another cool, interesting thing.
There are multiple overlapping systems each company is using.
One of the biggest restaurant chains was using a POS/CRM
system from one vendor and another one from the inventory side, with

(32:11):
forecasting, vendor management, and quotations, that sort of
thing.
They had not matched these two sets of data; they were like, okay,
we are running some sort of dashboard,
then we are matching things and forecasting the inventory and
trying to understand user behavior.
They tapped into those data sets.
They just connected the two systems, and with the help of current AI

(32:35):
connectors, I can say it was super easy.
There are tons of SaaS products available.
So you can connect multiple things into one and generate a
good amount of in-depth analytics on your own data.
And they were surprised that each and every order they were placing on
the vendor side was linked with customer behavior.

(32:57):
So they got cost savings on top of revenue boosts, and they
improved their cap table with that.
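A minimal sketch of the kind of join described here, assuming two hypothetical open-API endpoints for the POS/CRM and inventory systems: pull orders from both, link them on a shared item code, and compare weekly sales against vendor purchases. The endpoint URLs and column names are invented for illustration.

```python
import pandas as pd
import requests

# Hypothetical open-API endpoints; real POS/CRM and inventory vendors expose
# their own (authenticated) equivalents.
POS_URL = "https://pos.example.com/api/orders"
INVENTORY_URL = "https://inventory.example.com/api/purchase_orders"

# Assume both feeds return records with: item_code, order_date, quantity.
pos_orders = pd.DataFrame(requests.get(POS_URL, timeout=30).json())
purchase_orders = pd.DataFrame(requests.get(INVENTORY_URL, timeout=30).json())

# Link customer demand with vendor purchasing on the shared item code;
# overlapping columns get "_sold" / "_purchased" suffixes.
merged = pos_orders.merge(purchase_orders, on="item_code",
                          suffixes=("_sold", "_purchased"))

# Compare weekly sales against what was ordered from vendors.
weekly = (merged
          .assign(week=pd.to_datetime(merged["order_date_sold"]).dt.to_period("W"))
          .groupby(["week", "item_code"])[["quantity_sold", "quantity_purchased"]]
          .sum())
print(weekly.head())
```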
I think those are two different examples, but related where the first example was
anytime you find yourself looking for information, you can probably automate

(33:21):
something there.
Yeah.
The second one was if you have two separate sources of records or information
like customers and vendors, you can probably get something by putting those
two together.
Yeah.
And there's a big advantage we have in today's world. I remember when I

(33:42):
started my first job as an SAP developer, that sort of thing,
we were so restricted in communicating from one system to another system.
At that time we did not have the proper tools available
and the market was not open, but currently each and every product or

(34:05):
each and every service is coming with an open API.
So you can use those.
You can put that data in and use any ML library or any AI service
available in the market, and you can achieve whatever you are dreaming of,
like, how can I do that?

(34:26):
You can quickly put that question to your own data sets, and instead of
asking ChatGPT, your data sets will respond to you with high-quality
data and metrics.
Like, this is the metric you need to focus on, and here's how you can do that.
So you will get all the answers without setting up

(34:48):
your board meetings on a quarterly cadence.
So you can get answers faster than having to have board meetings all
the time.
Sounds like a good promise.
Yes.
And in board meetings, you will have very concrete information from your
data sets to make precise decisions.
You will not lose six months of time doing trial and error.

(35:12):
You will get the results immediately, and in the next board meeting you will
have very precise information for moving forward.
And you mentioned being able to, I think, ask questions of your own data.
Is there any particular AI that you would use to ask those questions?
That again will depend on their IT infrastructure, as you and I know
very well, but TensorFlow's Q&A is an open-source model and can be
deployed with any sort of data source.
That's one example, but it depends on their current ecosystem.
If they have a more robust ecosystem and they're tied to
one cloud provider, every provider has those AI services in place.
They can use any AI service for that particular level of data.

(35:58):
They can use it for that particular Q&A or data-gathering
thing.
And on top of that, they can add their own ML or AI learning mechanism.
That's the biggest achievement they can reach.
So from the previous question, their system is learning for the next

(36:19):
question, which is something we learned elsewhere as well:
we can make that Q&A more productive on the learning side.
So they can implement that and go through a step-by-step process of
creating the best AI practices.
And another thing which I love about the current ecosystem: they

(36:45):
don't have to go outside of their current provider to get it.
They don't have to hire what they call external data vendors or anything;
it's their existing data.
If you are using Google Workspace, plug in Gemini, connect it with Google
Analytics, connect it with your Google Sheets, and start providing that Q

(37:11):
and A with those data parameters.
And you are going to analyze your website and your sales data together in
one single query.
Neat.
That is a good point.
All the major cloud providers and software providers have integrations
with AI and you don't need to go and find a separate vendor.

(37:33):
Yeah, exactly.
Cool.
And I think maybe the last major topic I wanted to ask you was what do you
wish that you had known when you started building AI products and what
difference would that make?
That's a good question.
I started building AI without knowing it's AI.

(37:53):
I was doing it just to see how a generative model works,
because for me AI was more about the interaction side.
I was not on the chat side of AI.
I was more on the image processing side.
I was more into those sorts of things when I got inspired by
Pranav Mistry, when he showed what Samsung was doing, where whenever

(38:19):
you are taking a picture you can remove the people from the background,
and Google then adopted those sorts of things.
So I got inspired by that, and I'm a big fan of Iron Man sort
of characters.
I was thinking, if we interact with data analytics dashboards, how
could that interaction happen in that sort of lively way?

(38:39):
So that's how I got involved. The one thing I was not
aware of about AI is that things will go in the wrong direction
if you provide the wrong data or the wrong framing.
So you need to be very careful about what sort of data you are providing, and
sometimes it will give you false results or false commitments

(39:02):
if you go about it in the wrong way.
So you need to be very careful about how you are training your models, what models
you are using, and what the limitations and advantages of those models are.
I've heard the phrase garbage in garbage out before.
Can you give us an example of a time when you did get some kind of garbage

(39:27):
result?
Yeah.
For one of the clients, we were implementing forecasting based on
their main requirements for their delivery systems.
We relied on their existing road mappings and those sorts of systems,
like which are the best roads and optimized routes.
We were relying on live traffic data, and we tried to put in all the fancy

(39:52):
things so we could provide more accurate data sets.
And by mistake, we pulled in the availability zones and that sort of data.
And the mistake we made was not checking whether the data was accurate or not.
We just got the data.
We connected the data, and the data connector was working on its own.

(40:13):
And we saw all the wrong results, because the data was not filtered.
The data they added, their person was just adding the same data into Excel,
basically copy-pasting it.
It was not real-time data.
And the algorithm was thinking everything is precise,
everything is working on time, and everything is top-notch.

(40:35):
And they created the delivery schedules based on that.
None of the delivery schedules matched.
And none of the routes were optimized for that part.
It was even giving them routes like go 10 kilometers that way and then 20 kilometers
the other way.
It was a complete mess, and they were like, what sort of AI have you built?
And we were like, oh, sorry.

(40:55):
That was the biggest mistake we made, not checking the data quality.
I regret implementing it that way.
That sounds like a hilarious learning experience really.
Yeah, that is.
Okay.
So if I had to summarize what you would do differently:
now you would check the data before you tried to build AI on it.

(41:19):
Exactly.
So if you have time for one more question.
Sure.
I'm interested in hearing what do you see as the future of AI?
That's an amazing one.
I see the future of AI with the next generation.
They will not consider it AI.
They will consider it a normal tool.

(41:41):
Like our parents, they were not aware of the power of computers.
You and I were aware of what a computer can be used for, with that sort of power.
And the next generation is using AI in a much more precise way; your kids or my kids are using AI better than us.
And they know how to interact with AI.

(42:02):
My younger one has been using Google speakers since he was three or four.
And they know how things can be done with AI.
So I can see a bright future for this technology.
And the cool and the scary thing is that it's growing so fast.

(42:25):
It's hard to keep up with it.
Yes, actually, I agree with that.
It's bright.
I don't know what it looks like though.
Yeah.
Thanks for listening.
I made this podcast because I want to be the person at the city gate to talk to every person coming in and out doing great things with AI.

(42:47):
And find out what and why and then share the learnings with everyone else.
It would mean a lot if you could share the episode with someone that you think would like it.
And if you know someone who would be a great person for me to talk to, let me know.
Please reach out to me at Daniel Manary on LinkedIn or shoot an email to daniel@manary.haus, which is Daniel at M-A-N-A-R-Y dot H-A-U-S.

(43:16):
Thanks for listening.