
September 19, 2024 · 23 mins

Downtime is a costly killer. But traditional predictive maintenance methods often fall short. Discover how multisensory AI is used to uplevel equipment maintenance.

Multisensory AI uses sight, sound, and smell to accurately predict potential equipment failures, even with limited training data. This innovative approach can help businesses reduce downtime, improve efficiency, and save costs.

In this podcast, we explore how to successfully implement multisensory AI into your existing infrastructure and unlock its full potential.

Join us as we explore these ideas with:
Rustom Kanga, Co-Founder and CEO, iOmniscient
Christina Cardoza, Editorial Director, insight.tech

Rustom answers our questions about

  • Limitations to traditional predictive maintenance
  • A multisensory and intuitive AI approach
  • Training AI to emulate human intelligence
  • Providing accurate and valuable results
  • Investing in a multisensory AI approach
  • How businesses leverage intuitive AI
  • Partnerships and technologies behind success
  • The future of multisensory and intuitive AI

Related Content

To learn more about multisensory AI, read Multisensory AI Revolutionizes Real-Time Analytics. For the latest innovations from iOmniscient, follow them on Twitter at @iOmniscient1 and LinkedIn.


Episode Transcript

Available transcripts are automatically generated. Complete accuracy is not guaranteed.
(00:00):
(lively music)
- Hello and welcome to "insight.tech Talk,"
where we explore the latest IoT, edge, AI,
and network technology trends and innovations.
I'm your host, Christina Cardoza,

(00:21):
Editorial Director of insight.tech
and today I'm joined by Rustom Kanga from iOmniscient
to talk about the future of predictive maintenance.
Hi Rustom, thanks for joining us.
- Hello, Christina.
- Before we jump into the conversation,
I'd love to get to know a little bit more
about yourself and your company.
So what can you tell us about what you guys do there?
- I'm Rustom Kanga,

(00:42):
I'm the Co-Founder and CEO of iOmniscient.
We do autonomous, multisensory AI-based analytics.
Autonomous means there's usually no human involvement
or very little human involvement.
Multisensory refers to the fact that humans use their eyes,

(01:07):
their ears, their nose, to understand their environment,
and we do the same.
We do video analysis,
we do sound analysis,
we do smell analysis,
and with that we understand what's happening
in the environment.
And we've been doing this for the last 23 years.
So we've been doing artificial intelligence
long before it became fashionable,

(01:29):
and hence we've developed a whole bunch of capabilities
which go far beyond what is currently talked about
in terms of AI.
We've implemented our systems in about 70 countries
around the world in a number of different industries.
This is technology that goes across many industries

(01:53):
and many areas of interest for our customers.
Today we are going to of course talk about
how this technology can be used
for predictive and preventative maintenance.
- Absolutely. And I'm looking forward to digging in,
especially when you talk about
all these different industries you're working in,
railroads, airports.

(02:14):
It's extremely important that equipment doesn't go down,
nothing breaks,
that we can predict things
and don't have any downtime.
This has been something that I think
all these industries have been looking to strive for
for quite some time,
but it doesn't seem like we've completely achieved it
or there are still accidents
or the unexpected still happens.

(02:34):
So I'm curious, when it comes to detecting equipment failure
and predictive maintenance,
what have been the limitations
to traditional approaches?
- Today when people talk of artificial intelligence,
they normally equate it to deep learning
and machine learning technologies.
And you know what that means I'm sure,

(02:58):
for example, if you want to detect a dog,
you'd get 50,000 images of dogs,
you'd label them and you say, "This is a dog,
that's a dog, that's a dog, that's a dog."
And then you would train your system
and once you've trained your system,
the next time a dog comes along,
you'd know it's a dog.

(03:18):
That's how deep learning works.
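
To make that workflow concrete, here is a minimal sketch of the label-and-train loop Rustom describes, using PyTorch and a hypothetical folder of labeled images. This is a generic illustration of deep learning, not iOmniscient's pipeline, and the paths, class layout, and hyperparameters are all assumptions:

```python
# Minimal supervised "label 50,000 dogs and train" sketch (hypothetical paths).
import torch
import torch.nn as nn
from torchvision import datasets, models, transforms

# Labeled images arranged as labeled_images/dog/*.jpg and labeled_images/not_dog/*.jpg
transform = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])
dataset = datasets.ImageFolder("labeled_images", transform=transform)
loader = torch.utils.data.DataLoader(dataset, batch_size=32, shuffle=True)

# Start from a pretrained backbone and replace the classifier head.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, len(dataset.classes))

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.CrossEntropyLoss()

model.train()
for epoch in range(5):          # many passes over many labeled examples
    for images, labels in loader:
        optimizer.zero_grad()
        loss = loss_fn(model(images), labels)
        loss.backward()
        optimizer.step()
```

The point of the sketch is the dependency Rustom highlights next: everything hinges on that large labeled dataset existing in the first place.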
The challenge with maintenance systems
is that when you install some new equipment,
you don't have any history
of how that equipment will break down
or when it'll break down.
So the challenge you have is you don't have any data

(03:40):
for doing your deep learning.
And so you need to be able topredict what's going to happen
without the data that you can use
for deep learning and machine learning.
And that's where we use
some of our other capabilities.
- Yeah, that image that you just described,
that is how I often hear,

(04:03):
thought leaders talk about predictive maintenance
is the machine learning collecting all this data
and detecting patterns.
But to your point, it goes beyond that.
And if you're implementing new technology
or new equipment,
how do you find that you don't have that data
and you don't have that pattern?
I want to talk about first though,
the multisensory approach
that you brought in your introduction.

(04:23):
How does this address some of those challenges
that you just mentioned
and bring more of a, you know, natural,
I guess, human inspection to predictive maintenance,
human-like inspection?
- Well, it doesn't involve human inspection.
First of all, as we saw, you don't have any data, right,

(04:46):
for predicting how the product will break down.
Well, very often with new products,
you might have a mean time between failures of, say, 10 years.
That means you have to wait 10 years
before you actually know how, when, or why
it'll break down.
So you don't have any data,

(05:06):
which means you cannot do any deep learning.
So what are the alternatives?
We have developed a capability called intuitive AI
which uses some of the other aspects of how humans think.
Artificial intelligence is all about emulating

(05:27):
human intelligence.
And humans don't just use their memory function,
which is essentially
what deep learning attempts to replicate.
Humans also use their logic function.
They have deductive logic,
inductive logic,
they use intuition and creative capabilities and so on

(05:50):
to make decisions on how the world works.
So it's very different to the way you'd expect
a machine learning system to work.
So what we do is we use our abilities
as a human to advise the system on what to look for.

(06:16):
And then we use our multisensory capabilities
to look for those symptoms.
For instance, just as an example,
if a conveyor belt has been put in place,
has been installed,
and we want to know if itis about to break down,
what would you look for
to predict that it's not working well?

(06:38):
You might listen to its sound, for instance,
you might know that when it starts going
clang, clang, clang, that something's wrong in it.
So we can use our ability to see the object,
to hear it, to smell it,
to tell us how it's operating at any given time

(06:59):
and whether it's showing any of the symptoms
that you'd expect it to show
when it's about to break down.
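
As a concrete illustration of listening for that "clang": a toy sketch that flags a sound clip whose frequency profile drifts from a known-healthy baseline recording. The sample rate, threshold, and signals are invented for illustration; this is not iOmniscient's sound analytics:

```python
# Toy acoustic-symptom check: compare a clip's frequency profile against a
# baseline captured while the conveyor was known to be healthy.
import numpy as np

def spectrum(clip: np.ndarray) -> np.ndarray:
    """Unit-normalized magnitude spectrum of a mono audio clip."""
    mag = np.abs(np.fft.rfft(clip))
    return mag / (np.linalg.norm(mag) + 1e-9)

def sounds_abnormal(live: np.ndarray, baseline: np.ndarray,
                    threshold: float = 0.3) -> bool:
    """Flag the clip when its spectrum drifts too far from the baseline."""
    distance = np.linalg.norm(spectrum(live) - spectrum(baseline))
    return distance > threshold  # threshold is an illustrative value

# Usage sketch: 1-second clips at an assumed 16 kHz sample rate.
rng = np.random.default_rng(0)
healthy = rng.normal(size=16000)                      # stand-in for a recording
clanging = healthy + np.sin(np.arange(16000) * 0.5)   # injected tonal "clang"
print(sounds_abnormal(clanging, healthy))             # True
```

A real deployment would need clips of actual machinery and a threshold tuned to it, but the idea is the same: the symptom is described up front, and the system watches (or here, listens) for it.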
- That's amazing.
And of course there's no humans involved,
but you're adding the human-like elements into it,
say that somebody manually inspecting would look for,
if anything's smoking, if they smell anything,

(07:20):
if they hear any abnormal noises.
So how do you train AI
to be able to provide this interactive
or be able to detect these capabilities
when it is just artificial intelligence
or a sensor on top of a system?
- Exactly how you said you do it.
You tell the system what you're likely to see.

(07:44):
For instance, let's say you're looking at some equipment,
and the most likely scenario is that it's likely to rust.
And if it rusts
there's a propensity for it to break down.
You then tell your system to look for rust
and over time it'll look for the changes in color.

(08:06):
And if the system sees rust developing,
it'll start telling you that
there's something wrong with this equipment.
It's time you looked at replacing it
or repairing it or whatever.
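
To make the rust example concrete, here is a minimal sketch of watching for that color change with OpenCV: mask a reddish-brown hue band and alert when its share of the image grows over time. The HSV bounds and alert margin are assumptions for illustration, not iOmniscient's values:

```python
# Illustrative rust check: measure what fraction of an equipment image falls
# in a reddish-brown hue band, and alert when that fraction rises.
import cv2
import numpy as np

RUST_LOW = np.array([5, 60, 40])      # HSV lower bound (illustrative)
RUST_HIGH = np.array([25, 255, 200])  # HSV upper bound (illustrative)

def rust_fraction(image_bgr: np.ndarray) -> float:
    """Fraction of pixels whose color falls in the rust-like hue band."""
    hsv = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, RUST_LOW, RUST_HIGH)
    return float(np.count_nonzero(mask)) / mask.size

def check_equipment(today_bgr: np.ndarray, baseline_fraction: float) -> None:
    f = rust_fraction(today_bgr)
    if f > baseline_fraction + 0.05:  # a 5-point rise is a made-up alert level
        print(f"Rust coverage up to {f:.1%}: schedule inspection or repair.")
```

Note what is absent: no labeled failure history. A human names the symptom (rust-colored pixels spreading), and the system just measures it.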
- Great. Now I want to go back to training the AI
and the data sets,
like we talked about how do you do this

(08:28):
for new equipment?
I think there's a misconception
or a lot of providers out there
that need to do that extensive training
that takes a long time,
they need that data to uncover these patterns
to learn from them, to identify these abnormalities.
So how is your solution
or your company able to do this with fewer data sets

(08:48):
but ensure that it is accurate and it does provide value
and benefits to the end user or organization?
- Well as I said, the traditional approach
is to do deep learning and machine learning,
which requires massive data sets
and you just don't have them in some practical situations.
So you have to use other methods of human thinking

(09:12):
to understand what is happening.
And these are the methods which we call intuitive AI.
They don't require massive amounts of data.
We can train our system with something like,
maybe, 10 examples of the dataset or even less.
And because you require so few data sets,

(09:35):
you don't need massive amounts of computing,
you don't need GPUs.
And so everything we do
is done with very little training, with no GPUs.
We work purely on the standard Intel CPUs
and we can still achieve accuracy.
Let me give you an example of what I mean

(09:55):
by achieving accuracy.
We recently implemented a system
for a driverless train system.
They wanted to make sure that
nobody walked in front of the train
because, you know, obviously it's a driverless train
and you have to stop it,
and that requires just a simple intrusion system.

(10:18):
And there are hundreds of companies who do intrusion,
in fact, camera companies provide intrusion systems
as part of their...
you know, embedded into their cameras.
And so the railway company we were talking to
actually did that.
They bought some cameras
from a very reputable camera company

(10:39):
and they could do the intrusion,
the intrusion detection.
The only problem they had
was they were getting something like
200 false alarms per camera per day,
which made the whole system unusable.
Then finally they set the criteria
that they want no more than one false alarm
across the entire network.

(11:00):
And they found us and they brought us in
and we could achieve that for them.
And in fact, with that particular train company
we've been providing them with a safety system
for their trains for the last five years.
So you can see that the techniques we use
actually provide you with very high accuracy,

(11:21):
much higher than you can get
with some of these traditional approaches.
In fact, with deep learning you have the significant issue
that it has to keep learning continuously almost forever.
For instance, you know the example I gave you
of detecting dogs and recognizing dogs.
You have 50,000 dogs, you train your system,
you recognize the next dog that comes along,

(11:42):
but if you haven't trained your system
on a particular type, unique type of dog,
then the system may not recognize the dog
and you have to retrain the system,
and this type of traininggoes on and on and on.
It can be a forever training.
You don't necessarily require that
in an intuitive AI system,

(12:04):
which is the type of technology we are talking about.
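
The intrusion piece of that train story is conceptually simple, which is exactly why the false-alarm rate is the hard part. Here is a bare-bones sketch of zone-based intrusion detection; the detector feeding it, the coordinates, and the keep-out polygon are all hypothetical, and nothing here reflects how iOmniscient actually suppresses false alarms:

```python
# Bare-bones intrusion check: given person boxes from any object detector,
# alert when a person's foot point lands inside a keep-out zone over the track.
from matplotlib.path import Path

TRACK_ZONE = Path([(100, 400), (540, 400), (620, 700), (20, 700)])  # pixels

def intrusions(person_boxes: list[tuple[int, int, int, int]]) -> list[int]:
    """Return indices of (x1, y1, x2, y2) boxes whose bottom-center is in the zone."""
    hits = []
    for i, (x1, y1, x2, y2) in enumerate(person_boxes):
        foot = ((x1 + x2) / 2, y2)  # bottom-center approximates ground contact
        if TRACK_ZONE.contains_point(foot):
            hits.append(i)
    return hits

print(intrusions([(280, 300, 340, 520), (700, 100, 760, 300)]))  # -> [0]
```

Getting from something like this to one false alarm across an entire network is the real problem: rain, shadows, animals, and reflections all land inside the zone too.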
- Yeah, I could see this technology being useful
in other scenarios too,
rather than just, like, different types of dogs.
I know sometimes equipment moves around on a shop floor
or, you know, things change,
and if you move the camera and its positioning,
usually you have to retrain the AI from there,

(12:25):
because that relationship has been changed.
So it sounds like that's something that
it would be able to continue to provide the results
without having to be completely retrained
if you move things around.
In that railroad example that you gave,
you mentioned how they installed cameras
to do some of the thingsthat they were looking to do.

(12:45):
But I know a lot of times,
like manufacturers' shops and railroad systems,
they have their cameras,
they're monitoring for safety and other things.
Now, if they wanted to be able to take advantage
of your capabilities
on top of their already existing infrastructure,
is that something that they would be able to do
or does it require the installation
of new hardware and devices?

(13:07):
- Well, in that example of the railway,
we use the existing cameras
that they had put in in the first place.
We can work with anybody's cameras,
anybody's microphones,
of course the cameras are the eyes,
we are only the brain.
So the cameras have to be able to see what you want to see.
So we provide the intelligence
and we can work with existing infrastructure for video,
for sound, for smell.

(13:28):
Smell is a very unique capability.
Nobody makes the type of smell sensors
that are required to actually smell industrial smells.
So we have built our own e-Nose,
which we provide to our customers.
It's a unique device with something like six sensors in it.

(13:51):
You do get sensors in the market of course
for single molecules.
So if you wanted to detect carbon monoxide,
you can get a sensor for carbon monoxide,
but most industrial chemicals are much more complex.
For instance, even a cup of coffee has something like
400 different molecules in it.
And so to understand that this is coffee and not tea,

(14:13):
you need a sensor of the type of our e-Nose,
which has multiple sensors in it.
By understanding the pattern
that is generated across all those sensors,
we know that it is this particular product
rather than something else.
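
The e-Nose internals aren't public, but the pattern-matching idea Rustom describes can be sketched generically: compare a six-sensor reading against stored signatures and take the closest match. All numbers below are invented for illustration:

```python
# Generic multi-sensor smell matching sketch (not the e-Nose's actual method):
# compare a 6-element sensor reading against known signature patterns.
import numpy as np

SIGNATURES = {                      # one 6-sensor pattern per known smell
    "coffee": np.array([0.82, 0.41, 0.10, 0.55, 0.23, 0.67]),
    "tea":    np.array([0.30, 0.72, 0.15, 0.40, 0.61, 0.12]),
}

def identify(reading: np.ndarray) -> str:
    """Nearest stored signature by cosine similarity across all six sensors."""
    def cos(a: np.ndarray, b: np.ndarray) -> float:
        return float(a @ b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9)
    return max(SIGNATURES, key=lambda name: cos(reading, SIGNATURES[name]))

print(identify(np.array([0.80, 0.45, 0.12, 0.50, 0.25, 0.70])))  # -> coffee
```

The design point is the one Rustom makes: no single sensor identifies a complex smell; the identity lives in the pattern across all of them.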
- So I'm curious,

(14:33):
I know we talked about the railroad example,
but since your technology
spans across all different types of industries,
do you have any other use cases
or customer examples that you can share with us?
- Of course. You know, we have something like 300 use cases
that we've implemented across 30 different industries,

(14:54):
and if you just look at predictive maintenance,
it could be a conveyor belt, as I said,
that is likely to break down,
and you can understand
whether it's going to break down based on its sound.
It might be a rubber belt used in an elevator,
it might be products that might rust

(15:15):
and you can detect the level of rusting
just by watching it, by looking at it using a camera.
You can use smell,
you can use all these different senses
to understand what is the current state of that product.
And in terms of examples across different industries,

(15:37):
I'll give you one which demonstrates
the real value of a system like this in terms of its speed.
Because you are not labeling 50,000 objects,
you can actually implement the system very quickly.
We were invited into an airport

(15:59):
to detect problems in their refuse rooms.
Refuse rooms are the garbage rooms
that they have under the airport.
And this particular airport had 30 or 40 of them
where the garbage from the airport
and from the planes that land over there and so on,
it's all collected over there.
And of course when the garbage bags break
and the bins overflow,

(16:19):
you can have all sorts of other problems
in those refuse rooms.
So they wanted to keep these neat and tidy.
And to make sure that they were neat and tidy,
they decided to use
artificial intelligence systems to do that.
And they invited, I think it was about eight companies,

(16:41):
to come in and do POCs over there, proofs of concept.
Now they said, "Take four weeks,
train your system, and show us what you can do."
And after four weeks nobody could do anything.
So they said, "Take eight weeks."

(17:01):
Then they said, "Take 12 weeks
and show us what you can do."
And none of those companies
could actually produce a system
that had any level of accuracy
just because of the number of variables involved.
There are so many different things
that can go wrong inthat sort of environment.

(17:24):
And then finally they found us and they asked us,
"Can you come and show us what you can do?"
So we sent in one of our engineers
on a Tuesday afternoon,
and on that Thursday morning
we were able to demonstrate the system
with something like 100% accuracy.

(17:46):
That is how fast the system can be implemented,
because you don't have to go through
50,000 sets of data that you have to train on.
You don't need massive amounts of computing,
you don't need GPUs.
And that's the beauty of intuitive AI.
- Yeah, that's great.

(18:07):
And you mentioned you're also using Intel CPUs.
I should mention insight.tech
and the "insight.tech Talk" are sponsored by Intel.
So I'm curious, how do you work with Intel
and the value of that partnership
and the technology in making some of these use cases
and solutions successful?

(18:27):
- We've been a partner of Intel for the last 23 years,
and we work exclusively with Intel.
We've had a very close
and meaningful relationship with them over these years,
and we find that the equipment that they generate
has benefits, in that

(18:48):
we can trust it, we know it'll always work,
we understand how it works.
It's always backward compatible,
which is important for us
because customers buy products for the long term,
and because it delivers what we require,

(19:10):
we do not need to use
anybody else's GPUs and so on.
- Yeah, that's great.
And I'm sure they're always
staying on top of the latest innovation,
so it allows you to scale
and provides that flexibility
as multisensory AI continues to evolve.
So since you said in the beginning

(19:32):
you guys started with AI before it was fashionable.
I'm curious, how has it evolved,
this idea of multisensory intuitive AI,
how has it evolved since you've started
and where do you think it still has to go
and how will the company be a part of that future?
- Well, it's been a very long journey.
When we first started, we focused on trying to do things

(19:53):
that were different to what everybody else did.
There were a lot of people who used standard video analysis,
video motion detection and things like that
to understand the environment.
And we developed technologies
that worked in very difficult,
crowded and complex scenes
that positioned us well in the market.

(20:16):
Today we can do much more than that.
We can, you know, we do face recognition,
number plate recognition,
it's all privacy protected.
As I said, we do video, soundand smell based systems.
Where are we going?
The technology keeps evolving
and we try and stay at the forefront of that technology.

(20:38):
For instance, in the past,
all such analytics required the sensor to be stationary.
For instance, if you had a camera,
it had to be stuck on a pole or a wall somewhere.
But what happens when the camera itself is moving,
for instance, on a body-worn camera
where the person is moving around,
or on a drone, or on a robot that's walking around?

(20:59):
So we have started evolving technologies
that'll work even on those sorts of moving cameras,
and we call that wild AI,
it works in very complex scenes,
in moving environments
where the sensor itself is moving.
Another example is where we've started...

(21:23):
We'd initially developed our smell technology
for industrial applications,
for things like waste management plants,
for things like airport toilets.
They clean the toilet every four hours,
but it might start smelling after 20 minutes.
So the toilet itself can say, "Hey, I'm toilet 6,

(21:44):
come back and clean me again."
It can be used in hospitals
where a person might be incontinent
and you can say to the nurse,
"Please go and helpthe patient in room 24,
replace the smelling." And so on.
It can be used for industrial applications
of a number of types.
But we also discovered that

(22:06):
we could use the same device
to smell the breath of a person,
and using the breath we can diagnose
early stage lung cancer and breast cancer.
Now, that's not a product we've released yet.
We are going through the clinical tests

(22:27):
and clinical trials
that one needs to go through
to release this as a medical device,
but that's where the future is.
It's unpredictable.
We wouldn't have imagined 20 years ago
that we'd be developing devices for cancer detection,

(22:50):
but that's where we are going.
- It's amazing to see, and I can't wait to see
what else the company comes up with
and how you guys continue
to transform industries and the future.
I want to thank you Rustom again
for coming onto the podcast.
It's been a great conversation
and thanks to our listeners.
I invite all of our listeners
to follow us along on insight.tech

(23:11):
as we continue to cover partners like iOmniscient
and what they're doing in this space,
as well as follow along with iOmniscient on their website
and their social media accounts
so that you can see
and be a part of some of these technologies
and evolutions that are happening.
So thank you all again
and until next time, this has been "insight.tech Talk."
(lively music)