
October 31, 2023 35 mins

For many years, Intel has been developing AI technologies that can empower people with disabilities and improve accessibility in our modern world. Disabilities come in all forms, and as we develop technology and tools, the taboo associated with such disabilities lessens over time. Muteness, for example, evolved into the rich sign languages used all over the world, but with the adoption of AI, those who struggle to speak have even more options for communicating. Discover how AI is breaking down barriers, enhancing mobility, and promoting inclusivity. Join the conversation with XIMERA LLC co-founder, Jagadish Mahendran, and AI Evangelist, Lama Nachman, as they dive into the many ways AI is making a meaningful difference in the lives of those with disabilities.

 

Learn more about how Intel is leading the charge in the AI Revolution at Intel.com/AIperformance

 

***The voice cloning feature was developed by Klassic Studios.

See omnystudio.com/listener for privacy information.


Episode Transcript

Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Speaker 1 (00:03):
If you are hearing the sound of my voice, then
you are not actually hearing my voice at all. What
I mean is that the voice you are hearing is
actually an assistive text-to-speech voice cloning tool that my
company created, and it is completely powered by AI. There
are many different types of AI tools to help people
who are differently abled. For me, it has restored my voice,

(00:23):
but there is so much more it can do. I'm
excited to see how it grows. Hey there, I'm Graeme Klass and this is Technically Speaking, an Intel podcast. The
show is dedicated to highlighting the way technology is revolutionizing
the way we live, work and move. In every episode,
we'll connect with innovators in areas like artificial intelligence to

(00:46):
better understand the human centered technology they've developed. As early
as the discovery of fire and the invention of the wheel,
technology has always been an innovation to improve people's lives. However, sometimes leaders in technology unintentionally exclude those who may deal with uncommon issues such as physical immobility, neurodivergence, visual impairments,

(01:07):
or even old age. While governments usually put systems in
place to acknowledge and care for these communities, it has
been the role of technology to create advancements necessary for
those dealing with disabilities to thrive just as much as
their abled counterparts. With the revolution of artificial intelligence, there are many new advancements that are providing accessibility to these communities in ways we never thought possible until now.

(01:31):
And I have two experts with me who are leading
the charge to a more accessible future. Lama Nachman is
a visionary leader at the intersection of technology and human experience.
With a distinguished career spanning academia and industry, Lama has
consistently pushed the boundaries of technology to enhance our daily
lives and redefine the way we interact with computers. Her

(01:53):
innovative work has not only advanced the field of AI,
but has also paved the way for more intuitive and
human machine interfaces. As an Intel Fellow and director of
its Human and AI Systems Research Lab, she leads the
team defining and executing the research for contextually aware and
personalized computing, developing sensing systems, algorithms, and applications to make

(02:15):
it all possible.

Speaker 2 (02:17):
Welcome Lama, thank you very nice to be here.

Speaker 1 (02:20):
We're also joined by Jagadish Mahendran, a visionary entrepreneur and
tech innovator who has made significant contributions to the fields
of artificial intelligence, renewable energy, and sustainable development. With a
relentless passion for cutting edge technology to address global challenges,
Jagadish has emerged as a driving force in shaping a more sustainable and interconnected world. Most recently, he joined XIMERA

(02:44):
LLC with his founding partners and a team of visually
impaired volunteers. He uses AI to develop solutions and assistance
for those dealing with sight loss and low visibility. Welcome, Jagadish. Thank you very much. It's such an honor to be here. Okay, I'll just start with Lama. Lama, how did you get

(03:05):
your start in tech and AI?

Speaker 2 (03:08):
And I would say that I've been in love with
tech probably since I was like two years old, you know.
I've always been into kind of like the latest and
the greatest technology growing up. But then after I graduated
from UW Madison, I actually joined Intel out of college

(03:29):
and then I worked there for a while. I went
out and did a few startups and then came back
to Intel specifically focused on that intersection of sensing and
understanding the world through that to create really compelling technology.
So that's been kind of like almost like a very
long career progression that brought me back to what I

(03:50):
was excited about.

Speaker 1 (03:51):
Excellent, and then in terms of the AI component, how
did you start to get involved in that?

Speaker 2 (03:57):
Yeah, so early on when I went back to Intel,
actually in two thousand and three, I started to look at,
you know, how do we make sense out of the
world around us? So to be able to understand a
lot of that sensor data that we were processing, whether
it's vision or audio or text or whatever, right, that really required work in AI to actually make sense out

(04:18):
of that data. So that's where it kind of started
around two thousand and four, and then since then it's
kind of looking at different ways of intersecting AI and
HCI to actually bring about really compelling experiences for users
and helping them perform all sorts of things in their lives.

Speaker 1 (04:37):
Excellent. And Jagadish, how did you get your start in technology?

Speaker 3 (04:43):
I have a very different story here. I was not interested in technology at all. I actually wanted to become a doctor, but you know, it's very competitive in India, so I didn't really get a good ranking, so I couldn't join the college that I wanted to join, and the second option was engineering, and I chose to do computer science.

(05:03):
I think the way it turned out for me is good; I'm enjoying artificial intelligence more than medicine. I think I'm a better engineer than a doctor.

Speaker 1 (05:16):
Oh that kind of makes sense with your I guess
love of medicine and the type of projects that you've
come up with. So we'll talk about that in a
little bit. But one thing I'll go back to Lama
on, in terms of AI improving the human experience: you've mentioned HCI.
First of all, if you can define for the audience
what HCI is, and also how do you address solutions

(05:38):
and maybe you could educate me around what's the difference
between accessibility and accommodation when you're designing a system.

Speaker 2 (05:44):
Yeah, so, first of all, HCI is human-computer interaction,
So it's really trying to understand how would people directly
interact with technology. And sometimes that technology is something that's
actually physical, like you're clicking something on a computer or whatever.
But a lot of times, you know, some of the
work that we really work on is embedding it into

(06:06):
the environment so that it almost becomes like invisible. And
that's kind of one of the most interesting things is
like to really architect for interactions of things that are invisible. Honestly,
if you think about any technology that you're developing, you
have to think about how you're making it accessible, how
the interfaces are accessible, how different people with different types
of disabilities and abilities can actually interact with your technology

(06:31):
throughout like the development cycle. In some sense, part of
what I've really been focused on is creating technologies for
people who are severely disabled, where you really need very
different ways of interacting with the technology to enable that
to happen. Really, the focus, specifically in the work with

(06:54):
ACAT and the work for Stephen Hawking, has really been about how you get around these constraints to enable people to access the technologies just like all of us.

Speaker 1 (07:06):
Lama mentions Intel's ACAT, which stands for Assistive Context-Aware Toolkit.
This technology was key in enabling Stephen Hawking's ability to
continue to communicate and inspire people around the world. Listening
to Lama speak about the human computer interaction processes, she
sounds less like a tech person and more like an anthropologist.

(07:27):
We often think of data and algorithms as being this
cold and impersonal assessment of people. But Lama has such
a passion for her programs, it makes me wonder just
how impactful that passion is to the way AI tools
interpret how to assist us. While she has spent so
much time learning how to program and manage computers, it
seems her real passion is in trying to understand humanity.

Speaker 2 (07:52):
My passion has really been focused on how do we
bring more equity with technology. The work towards specifically extreme
disability really came about from my interaction with Professor Hawking.
So before that, a lot of the work that I
had focused on in terms of accessibility is really bridging

(08:13):
where people's needs were as they're doing different aspects of
their lives. Right you're driving, for example, how can that
be contextually aware so it can help support you without
assuming that you have all of your abilities there. But
once I started working with Professor Stephen Hawking, it became
very obvious to me that to bridge that extreme disability

(08:36):
you really have to think very differently about how technology
comes in. And that's what really got me excited about
that work.

Speaker 1 (08:43):
Okay, and in terms of the involvement you had with
Professor Hawking's technology to help him interact with the world,
What were the areas that you looked at?

Speaker 2 (08:53):
The lab I lead is actually a multi disciplinary lab, right,
so we bring social science, design, and AI together. So
the first place you start is we needed to understand
how Stephen interacts with the world, what he is trying
to accomplish, and where his bottlenecks are in terms of
being able to do that with existing technology that he
was using. So there was a lot of observation to

(09:14):
try to understand how do we define the problem and
from there for people who are not aware of this, right,
Professor Hawking really didn't have an ability to speak, and
he didn't really have an ability to move, so he
couldn't really utilize many of the technologies that are available.
You couldn't do, for example, ASR where he could speak

(09:35):
and then the computer could be controlled by speech, Nor
could he type because he had no control over his hands.
So then we started to basically look at if we
really had a very very tiny signal, and in this
specific case for Professor Hawking, it was actually his ability
to move his cheek. Can we get access to that

(09:55):
one signal and then turn that into a complete access
for his whole machine? And then we went onto that
path of essentially building a software platform and a sensing
subsystem that allowed for that to happen. All he can
do is confirm something with the movement of a cheek,
and now he can type, he can email, he can

(10:17):
surf the web, he can give lectures, he could do all of that.

Speaker 1 (10:19):
In what year was that that you were working on?

Speaker 2 (10:24):
So we started our interaction with Stephen in twenty eleven
and it kind of lasted throughout his life until he
passed away, which was twenty eighteen. So we after a
couple of years we were able to put together a
system that he could use that he could switch to,
and then over the years we just continued to enhance
it and add more capabilities. We open sourced it so

(10:45):
that we could take it into the world.
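
To make that interaction pattern concrete, here is a minimal sketch of the single-switch "scanning" input Lama describes, where the system cycles through options and one confirm signal selects the highlighted one. It is an illustration under assumptions (a keyboard prompt stands in for the real sensor), not code from the actual open-source ACAT project.

```python
# Row-column scanning: the interface highlights options in turn, and the
# user selects with a single confirm signal (for Professor Hawking, a
# cheek movement; here, a keyboard prompt stands in for the sensor).
import itertools

ROWS = ["ABCDEF", "GHIJKL", "MNOPQR", "STUVWX", "YZ_ .,"]

def wait_for_confirm() -> bool:
    """Stand-in for the real sensing subsystem."""
    return input("confirm? [y/N] ").strip().lower() == "y"

def scan_select(options):
    """Cycle through the options until the user confirms one."""
    for option in itertools.cycle(options):
        print(f"highlighting: {option!r}")
        if wait_for_confirm():
            return option

def type_one_character() -> str:
    row = scan_select(ROWS)        # first pass picks a row
    return scan_select(list(row))  # second pass picks a character

if __name__ == "__main__":
    message = ""
    for _ in range(3):             # type three characters as a demo
        message += type_one_character()
        print("message so far:", message)
```

With only this confirm-or-wait loop, the same mechanism can drive any menu: typing, email, a web browser, or lecture notes.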

Speaker 1 (10:46):
And yeah, that was my next question in terms of
the technology that was developed. Have you seen it applied
more broadly to others?

Speaker 2 (10:54):
Yeah, And initially we were hoping that we could find
some technology out there we could take and modify slightly
so that he could use it. And then after being
proven wrong, we then went onto this path to go
and develop something from scratch. But from the get go
our goal was to develop it so that it could
support a lot of different users and be a platform

(11:17):
for developers to build on top of, because we realized
that there was that gap in what existed out there
in the open world. And Stephen was a huge contributor
to this project, right? He, you know, he was
a designer, he was a validator, he was you know,
he gave a lot of his insights. So throughout all
of this he was really focused on ensuring that that

(11:39):
actually went to open source because you know, people reached
out to him all the time because he was, you know,
a known figure with that extreme disability, and everybody was
asking him, like, what technology is available to us to
actually use. So he's been like really focused quite a
bit on making that available to the world.

Speaker 1 (11:59):
You can really sense how dedicated Lama is to helping
those with disabilities communicate with others. However, talking is just
one way we communicate, and moreover, there are a combination
of ways that we engage and interact with our environments.
Helping people who may struggle with another sense is our other guest, Jagadish, who originally designed

(12:21):
a backpack that uses AI to help guide the blind. His project expanded into really dissecting what it means to be visually impaired. Lama and Jagadish both have different approaches to their missions, yet their work complements each other so well. Jagadish, I'd like to get you into the conversation now, in

(12:42):
particular the AI powered backpack that has been developed by
yourself and others. Can you just tell me a little
bit about I guess the genesis of that idea.

Speaker 3 (12:53):
I've always wanted to do something using technology that can help society in one way or the other. And when I came to do my Masters in twenty thirteen, one of the first things that occurred to me was like, you know, we should use AI and use a bunch of sensors to help the visually impaired see the world,

(13:13):
like how sighted people see. And one of the primary visions that used to occur to me was when somebody is standing in a public place like a bus stop, there should be a solution in such a way that the person who is blind could go totally unnoticed. And around
that time the technology was not as good as how

(13:34):
it is now. The real inspiration occurred to me when
I met my friend. The day when I met her, she had a black mark on her face, and I was like, you know, what happened to your face? And she's visually impaired, and she was saying, as she was walking outside on the sidewalk, she ran into a tree branch and then that left a mark on her face.

(13:56):
And that was so ironic for me, because by then I was already a perception engineer, teaching robots to see things, you know, do complex tasks. But at the same time, there are so many people who cannot see, right? And that sort of sparked my desire to work

(14:17):
on this project sooner rather than later. Around the same time, this OpenCV Spatial AI competition was going on, sponsored by Intel, and I submitted this idea, and the project ended up winning the first prize. And this friend has been helping us throughout in developing a system that is more user friendly and actually solves important use cases.

(14:39):
And through this competition we received a lot of attention, and
this is when we started to think, you know, we
should probably you know, get incorporated and try to create
a full fledged open source system so that anybody in
the world can use it and help in improving the
lives of the visually impaired. Currently, we are supported by

(15:00):
Intel's IRTI program, the Intel RISE Technology Initiative, and we are working in collaboration with Accenture. Through this partnership, we're able to gain a lot of support, both on the technical and non-technical side. And soon, in a few months, we will be releasing our improved version of the system, which we call Phoenix.

Speaker 1 (15:21):
Okay, excellent, looking forward to it. And can you tell me about it? I've seen a little bit of a video where you've got a backpack. Maybe you could just describe some of the main system elements.

Speaker 3 (15:31):
Yeah, the physical system mainly consists of a backpack that has an Intel NUC with a couple of Neural Compute Sticks, and this is sort of the compute resource. And at the front we have a camera. It's an OAK-D camera that is put in the front and connected to the system behind, and this sensor collects the data.

(15:53):
We run some AI processing behind it using deep learning techniques, and the system will infer useful information about the environment and update the user, such as where the obstacles are, what are the common objects seen in the scenario, what are the moving objects, what are the traffic conditions, and more similar features. For communicating, there is an audio interface

(16:16):
through Bluetooth headphones, and we're also working on a haptic band to communicate the same sort of information in the form of vibrations, through tactile information.
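
As a rough sketch of the sense-infer-notify loop Jagadish describes, here is a minimal illustration; the detector is a placeholder and the helper names are invented, not the project's real code.

```python
# Sketch of the backpack loop: camera frame -> deep-learning inference ->
# spoken alert. Placeholder functions stand in for the real model and the
# Bluetooth audio interface.
from dataclasses import dataclass

@dataclass
class Detection:
    label: str        # e.g. "person", "tree branch", "curb"
    bearing: str      # "left", "ahead", "right"
    distance_m: float

def detect_obstacles(frame) -> list[Detection]:
    """Placeholder for the deep-learning inference step."""
    raise NotImplementedError

def speak(text: str) -> None:
    """Placeholder for the Bluetooth audio interface."""
    print("AUDIO:", text)

def notify(detections: list[Detection], alert_radius_m: float = 3.0) -> None:
    # Report nearby detections, nearest first, and skip distant ones.
    for d in sorted(detections, key=lambda d: d.distance_m):
        if d.distance_m <= alert_radius_m:
            speak(f"{d.label} {d.bearing}, about {d.distance_m:.0f} meters")

if __name__ == "__main__":
    frame_detections = [Detection("tree branch", "ahead", 1.4),
                        Detection("person", "left", 6.0)]
    notify(frame_detections)  # -> AUDIO: tree branch ahead, about 1 meters
```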

Speaker 1 (16:28):
Lama was just wondering if you had any comments or
thoughts on this AI backpack.

Speaker 2 (16:34):
I mean, it's a fantastic idea, and you know, if
you think about what is actually now possible with perception
and AI, I mean, it's it's just kind of like
the most natural thing to do to actually empower users
who are vision impaired with such a capability. I was
actually also quite intrigued by the haptic aspect of what

(16:56):
you mentioned, and I think it's something that tends to
be underutilized, but really kind of a natural
thing for this type of application, especially if you're trying
to kind of guide somebody in a direction. So I
was wondering maybe you can say a few words about that.
I was really intrigued by that.

Speaker 3 (17:13):
Yeah. So the first prototype contained mainly the audio interface; all the information is actually shared via the wireless headphones, and not all the users prefer that. The main reason is that visually impaired people rely on audio cues; when they're wearing earphones, we are sort of blocking a lot

(17:34):
of environmental cues, which is why we wanted to introduce
another modality for user interface, which is haptic bands. Basically,
using a combination of motors and vibration patterns, we can
communicate tons of information just using a few motors, even fewer than ten motors. And the current prototype that we're

(17:57):
working on is a pretty simple version. It can be put on the wrist, and this can communicate potentially hundreds of combinations of vibrations, and at some point we're really aiming for a setup where we can communicate pretty much everything the system sees through the haptic vibrations. If a

(18:18):
user prefers completely one hundred percent haptic bands, that is
something that we are targeting for. At the same time,
some users might prefer, you know what, I want this
sort of information to be communicated via audio and some
sort of information with the haptics. We're also working on a combination system as well, but having haptic

(18:39):
bands in a solution like this opens sort of a different dimension for the users here, especially those who are visually impaired.
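
A quick illustration of why a few motors go such a long way: with simple on/off states, n motors already give 2^n - 1 distinct non-empty patterns, and sequencing two pulses squares that, which is how fewer than ten motors can carry hundreds of messages. The message-to-pattern codebook below is invented for illustration; the band's real vocabulary is the team's own design.

```python
# Counting and using vibration patterns on a small haptic band.
from itertools import product

N_MOTORS = 4
single = 2 ** N_MOTORS - 1  # 15 non-empty on/off patterns per pulse
print(f"{single} one-pulse patterns, {single ** 2} two-pulse sequences")

# A toy codebook mapping messages to on/off motor patterns.
patterns = [p for p in product([0, 1], repeat=N_MOTORS) if any(p)]
messages = ["obstacle ahead", "veer left", "veer right", "crosswalk"]
codebook = dict(zip(messages, patterns))

def buzz(message: str) -> None:
    """Pretend to drive the motors named by the message's pattern."""
    pattern = codebook[message]
    active = [i for i, on in enumerate(pattern) if on]
    print(f"{message!r} -> vibrate motors {active}")

buzz("veer left")  # prints which motors would pulse for this message
```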

Speaker 1 (18:48):
What Jagadish hints at with his explanation of haptic bands
versus audio interface is a very fascinating, multi pronged approach
to the solution. In technology terms, haptics is all about
how your device interacts with you through touch. Think of
the times when your phone vibrates in your pocket, or
when you play a video game and the controller shakes
in response to you taking damage from the endgame boss. Oftentimes,

(19:12):
in developing solutions for disabilities, there is a one size
fits all approach that seeks to do an adequate job
for the most number of people. This strategy fails to
take into account the nuances of the human experience in
the same way that some people are audio learners while
others are visual. When it comes to aiding someone with
a disability, it is important to consider what methods complement

(19:34):
their strengths and experiences. The beauty of how Jagadish seeks to develop this AI tool is that it is constantly studying and creating more specialized options for the users, from audio to haptics. It has the potential to grow in a number of ways to accommodate the visually impaired, in ways we never thought to supplement them, and maybe these

(19:55):
developments will even have an impact on those with perfect sight. This leans into the human-computer interaction component that Lama mentioned earlier, the constant study and assessment of how people will actually use the tools. You're listening to Technically Speaking, an Intel podcast. We'll be right back. Welcome back to Technically

(20:27):
Speaking, an Intel podcast. I'd just like to get more broadly into Intel and its AI efforts now. Jagadish, you have a partnership with Intel. You've mentioned it before, how you're working with Intel and how they've come to the party,
so to speak. What's it like working with their team

(20:49):
in terms of their support and assistance they've given you.
It's fantastic. The amount of exposure and the support that
we've received from Intel is really amazing. What we admire
about Intel is how open they are in developing the
solutions for accessibility, and they have a dedicated team who's
purely working on solutions like this. And we also got

(21:10):
opportunity to look into the projects that Lama's team has
been working on. They're simply superb. I think these solutions
like this are transformative and it's going to change lives
for people. And in terms of the support that we've received,
they have been helping us on many aspects all the
way from helping with putting up a process for training the

(21:35):
model assets, you know, creating a platform for training models,
and also sharing connections, and also with funding. So a lot of the features that are going to come as part of Phoenix are coming out of the IRTI project, and the sort of feedback that helps in improving the solution is something that we don't get outside easily.

Speaker 2 (21:59):
Yeah, and it's one of the things that I believe
we're really all about at Intel, right? If you look at our mission, it's really to enrich the lives of every person on the planet. And every person, right, not just able people. So it's really wonderful that you're seeing that
support and the diversity of the type of platforms and
solutions that we have, right. So I'm really just very

(22:20):
heartened by what you said, Jagadish. One of the things that is maybe top of mind is a project called OmniBridge, and OmniBridge is essentially software that is meant to bridge, again, the silence gap, but for people who are hearing impaired, so that, you know, essentially you're translating in

(22:40):
and out of like sign language, so you know, people
can sign into their PC and then the PC can
actually translate that into language on the other end, and
then vice versa. Right, So it's like, you know, what
you're really enabling again is to enable people in their
everyday life to actually be able to do that.

(23:01):
And to be able to do that, you need a
lot of the AI support and AI compute on these platforms.
So one of the reasons, again, as I was saying, it's really nice to see it on these platforms and
at the lowest cost that you can actually bring it.
You start to really democratize AI in ways that really
improve people's lives.

Speaker 1 (23:19):
Yeah, for me, I mean one of the key things
you've just said is democratizing technology, and I think that's
the real power of it is. Yes, we can have
those really fancy solutions that Professor Hawking had, but for me,
it's about trying to get that cost down so that
it makes it so much easier for people to use.

Speaker 2 (23:41):
So actually, just as a correction, Professor Hawking didn't have a fancy system. It was actually a PC with a very lightweight sensor. And in fact, a big part of what we've been really trying to do with BCI is also democratize that, because the problem with BCI is, if you want something with really high fidelity, you're paying fifteen hundred to two thousand dollars on a headset, versus

(24:02):
what we're really trying to do is use OpenBCI, so it's, you know, really low cost, but, you know, we compensate for the fidelity constraints with a lot of machine learning.
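
As a toy illustration of that idea of compensating for a noisy, low-cost signal with machine learning, the snippet below uses synthetic one-second "trials" and a generic classifier, standing in for real EEG and whatever methods the team actually uses.

```python
# A weak 10 Hz oscillation buried in heavy noise is hard to see per
# sample, but simple bandpower features plus a classifier recover it.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
fs, n_trials, n_samples = 128, 400, 128  # 1-second trials at 128 Hz

def make_trial(active: bool) -> np.ndarray:
    t = np.arange(n_samples) / fs
    signal = np.sin(2 * np.pi * 10 * t) if active else np.zeros(n_samples)
    return signal + rng.normal(scale=3.0, size=n_samples)  # noisy sensor

def bandpower_features(x: np.ndarray) -> np.ndarray:
    spectrum = np.abs(np.fft.rfft(x)) ** 2
    freqs = np.fft.rfftfreq(len(x), d=1 / fs)
    alpha = spectrum[(freqs >= 8) & (freqs <= 12)].mean()  # 8-12 Hz band
    return np.array([alpha, spectrum.mean()])

y = rng.integers(0, 2, n_trials)
X = np.array([bandpower_features(make_trial(bool(label))) for label in y])

clf = LogisticRegression().fit(X[:300], y[:300])
print("held-out accuracy:", clf.score(X[300:], y[300:]))
```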

Speaker 1 (24:12):
Okay, great. And Jagadish, you did say it was relatively
low cost. Is that one of the primary motivating factors
for you? And how do you go about designing systems
to try and get that cost down.

Speaker 3 (24:28):
Absolutely, it's a major restricting factor. Just a bit of context,
The unemployment rate in the visually impaired people community is
extremely high. I think it's more than sixty percent, so
it's hard for them to afford any product that is expensive.
And this is something that we want to change by
one making it completely open source, so that anybody in

(24:51):
the world, if they have the technical skills, can just assemble the system and get it. If not, we can help them assemble the system.
The complete solution is going to be open source. Two
is building the product using hardware systems that are cheap and, at the same time, efficient, and that's where

(25:13):
products like the Intel NUC stand apart, one, because it has very good capability for running a lot of models in parallel and also using accelerators like the Neural Compute Stick. So things
like that help us in shrinking the form factor and
also the cost quite a bit. And at the same time,
at the software design level, if you are putting in

(25:34):
a modular design, where if somebody wants to use a cheaper sensor, they could plug in a different sensor, the rest of the robotics stack will remain intact, as long as they take care of the sensor abstraction layer.
And the same thing goes, probably, for the haptic interface, probably the audio interface, and potentially the compute interface. So we want

(25:56):
to modularize it as much as possible and shrink the
cost as much as possible.
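
A minimal sketch of that module-based idea, with invented class names: the rest of the stack depends only on a small sensor abstraction layer, so a cheaper sensor can be swapped in without touching the downstream code.

```python
# The pipeline depends only on this abstract interface, not on any one
# camera, so sensors can be exchanged behind the abstraction layer.
from abc import ABC, abstractmethod

class DepthCamera(ABC):
    @abstractmethod
    def read_frame(self) -> tuple:
        """Return (rgb_image, depth_map) in a common format."""

class PremiumStereoCamera(DepthCamera):
    def read_frame(self) -> tuple:
        return ("rgb-from-stereo", "depth-from-stereo-hardware")

class BudgetMonoCamera(DepthCamera):
    def read_frame(self) -> tuple:
        # A cheaper sensor can estimate depth in software instead.
        return ("rgb-from-mono", "depth-from-monodepth-model")

def pipeline_step(camera: DepthCamera) -> None:
    rgb, depth = camera.read_frame()
    print("running inference on", rgb, "and", depth)

for cam in (PremiumStereoCamera(), BudgetMonoCamera()):
    pipeline_step(cam)  # identical downstream code for either sensor
```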

Speaker 1 (26:02):
Jagadish mentions something I had never really considered, which is
the difficulty in finding gainful employment for those with visual impairments.
In the US and other developed nations, there are protocols
to provide reasonable accommodations to workers with disabilities, but globally
that has yet to become a common practice. With an
AI tool such as Jagadish's being open source, it really

(26:26):
helps move the needle in terms of what those with
visual impairments can do for themselves. Lama also mentions BCI,
or brain-computer interfaces. Most brain-computer interfaces use the electrical activity of the brain to directly interface with computers or machines.
The best way to imagine BCI is the character Cyborg
from the Teen Titans series, where he developed superpowers from

(26:48):
interfacing computing technology with his biological self. I'm wondering what
accommodations are considered when the user has ADHD or some
other form of cognitive processing disorder.

Speaker 2 (27:01):
So we've been looking specifically at utilizing BCI for communication
for locked in patients, right, And really, you don't want
to use BCI for communication unless you have to, because
it's not, I mean, unless you actually have something that's implanted in your brain. If you're going outside of the skull,

(27:21):
you have a very very noisy signal. So that's in
some sense you can think of it as a last resort. However,
what you just mentioned is something very different, right, which
is utilizing BCI as another sensing modality for all sorts
of other inferences, not to communicate your intention, but to
actually understand your state, and that is something that is

(27:44):
you know, yes, can be totally utilized for understanding, for example,
things like emotional state and concentration and focus and all
sorts of things like that that can help in cases
where you have people with autism, for example, and they're
having a hard time expressing emotional state as it's actually

(28:04):
getting worse.

Speaker 1 (28:05):
Right.

Speaker 2 (28:05):
There has been actually quite a lot of interesting research
out of Georgia Tech, for example, specifically looking at that
as an interesting modality for these type of settings.

Speaker 1 (28:15):
In terms of other individuals and organizations contributing, you've mentioned
both of you mentioned the open source initiative that Intel's pushing.
If individuals and organizations want to be involved, what's the
best way for them to get in and start contributing.

Speaker 2 (28:33):
So basically with ACAT, essentially it's an open
source project, right, and it's open to developers. We have
different people contributing all sorts of different things, right. I mean,
for example, we've seen a lot of interest in having
ACAT be available in different languages for people around the world, right,
So we have a way for having people easily contribute

(28:55):
to extend it to other languages. As an example, extending
it to other sensing modalities and so on, so you
can go through that project and then just kind of
communicate and submit what you want and communicate with us
as the people who are still kind of overseeing the project.
There are also like specific groups that we work with
because we're trying to also kind of get access to

(29:16):
users that we can test that technology with. So for example,
you know, the MND or the ALS groups
and things like that. So depending on usually some of
these groups have access to a lot of the different
solutions and open source systems that exist there. So that
also is a way I mean not necessarily just for ACAT,
but more broadly.

Speaker 3 (29:35):
We are seeing the very strong trend of a lot
of projects being open sourced, and because of this trend,
we're seeing a lot of powerful projects being democratized and
reaching people much more easily than before. In fact, a lot
of companies are actually following this model, starting to switch
from a different model to open sourcing model, which is

(29:56):
fantastic for the community. It's just fantastic for the world. However,
there are certain things that need to be considered when developing an open source solution. One of the most important things is how an open source project is defined, how it can evolve by itself at some point. Initially there are going to be primary contributors, but at some point there is

(30:17):
going to be a lot of people. You're going to
get contribution from all over the world, and this can
be both good and bad. If the response is very high,
then the initial contributors cannot handle it, and it might end up being pretty damaging, right? But at the same time, you need those responses. So it's important to know that
balance and also come up with how do we address

(30:39):
this as a process in general?

Speaker 1 (30:41):
Right?

Speaker 3 (30:41):
How can somebody contribute? How can somebody create a PR? It's going to be completely democratized, and there will be more reviewers distributed throughout the world.

Speaker 2 (30:49):
One of the things that I'm really happy to see
is really the amount of contribution in the open source
on all sorts of AI capabilities and language models, and
which really I think is enabling a lot of democratization
of AI, specifically to all of these different usages. Right,

(31:12):
because if you think about assistive computing in
some sense, in many cases you're trying to compensate for
some sort of a sensory impairment, right? So if you're
able to actually use AI to help extract that sense
automatically from the world. Having access to that democratization in
AI models and algorithms is something that is really transformational

(31:33):
for this space. And if I remember, for example, like
in the past, right, even getting access to something like ASR was really hard to do in the open source, at least at the level of quality that
you would see. But now lately, because of that quick movement,
you're seeing a lot of capability in the open source
that actually rivals that of the really you know, big companies,

(31:55):
which I think is absolutely transformational.

Speaker 1 (31:58):
Yeah, that's great. Now, a question for both of you. I'll start with Jagadish. You know, we're seeing AI being used
for accessibility efforts. Looking forward ten years, what's the number
one area which you would want AI to help in
this industry.

Speaker 3 (32:14):
I'd be really pleased to see a system that is
really small, that somebody can put in, like, glasses or any form, that goes totally unnoticed, and it provides all the capabilities of the human eye. I think that'll be fantastic. And the same thing goes for other forms of disabilities. I think that will be fantastic to see, and in a ten-year timeline,

(32:35):
I think it might be possible.

Speaker 2 (32:38):
My number one area that I want to see solved,
not necessarily in assistive computing, but actually climate change.
That's where I think, like we all need this otherwise
I'm not sure we're going to have a world to
actually do anything else. And in the area of assistive computing, it's really what I was saying earlier, which
is I envision being able to compensate for every single

(32:58):
sense that the human is missing, and that, to Jagadish's point,
is only going to be possible if that is meeting
people where they are in the world, which means they
have to be sustainable, they have to be extremely power efficient,
they need to be robust to everything they haven't seen in the world, right? So, which is really
not necessarily where things are today, but you know, given

(33:19):
that rapid improvement, I would really hope that that's where
we would be in ten years from now.

Speaker 1 (33:24):
Excellent. Okay, thank you very much, thank you, thank you.
I would like to thank my guests Jagadish Mahendran and
Lama Nachman for joining me on this episode of Technically Speaking,
an Intel podcast. I really enjoyed this conversation with Jagadish and Lama. I love being able to delve into
the motivations of the why, but also the how. You

(33:46):
heard from Jagadish and the story of his visually impaired
friend being struck by a tree branch, and that was
the seed for his idea for an AI assistant backpack.
For me, this is the true technological empowerment, the ability
for individuals to use their skills and talent to make
a difference, taking action rather than just talking about it.
These are the true innovators. It was great to hear

(34:08):
of Lama's work with Professor Stephen Hawking and the context-aware system her team developed. What was so pleasing
to me was that it wasn't a Rolls Royce design,
but rather an elegant yet simple system of sensors connected
to a PC to allow Professor Hawking to interact and
communicate with others. Because of this relatively inexpensive solution, it

(34:29):
can be used by a wider range of people. This
is what democratization of technology does for the world. I
hope that Lama and Jagadish's stories inspire you to take
the leap and contribute to improving the lives of people,
regardless of their background. Please join us on Tuesday, November
fourteenth for the next episode of Technically Speaking, an Intel podcast.

(34:54):
Technically Speaking was produced by Ruby Studios from iHeartRadio in
partnership with Intel, and hosted by me, Graeme Klass. Our
executive producer is Molly Sosher, our EP of Post Production
is James Foster, and our Supervising producer is Nikair Swinton.
This episode was edited by Sierra Spreen and written and
produced by Tyree Rush.