
August 8, 2024 20 mins

Christine Perey, AREA Founder, chats with VoxelSensors' Johannes Peeters, CEO, and Boris Greenberg, VP of XR Solutions, about current trends in Sensors.

Sensors are examined in the Hardware section of Christine's Top 12 blog: "Top 2024 Enterprise AR Trends to Watch." Read it here: https://thearea.org/top-2024-enterprise-ar-trends-to-watch/

Sensors is the #4 trend.

 


Episode Transcript

Available transcripts are automatically generated. Complete accuracy is not guaranteed.
(00:04):
Hello, I'm Karen Quatromoni, the Director of Public Relations for Object Management Group, OMG. Welcome to our OMG Podcast series. At OMG, we're known for driving industry standards and building tech communities. Today we're focusing on the Augmented Reality for Enterprise

(00:25):
Alliance (AREA), which is an OMG program. The AREA accelerates AR adoption by creating a comprehensive ecosystem for enterprises, providers, and research institutions. This Q&A session will be led by Christine Perey from Perey Research and Consulting.

(00:48):
Hello and welcome. I'm Christine Perey and I'm here with my colleagues at VoxelSensors. It's a pleasure to have you gentlemen join us. Remember, this is the AR for Enterprise Alliance, so these are people who are attracted to and working on industrial use cases.

(01:08):
You can understand that we're not talking about general-purpose mass market devices, nor those use cases. So thank you very much for joining me today. Could you please introduce yourselves?
Sure. Hi, Christine. I'm Johannes. I'm the CEO of VoxelSensors, and I've been in the industry of

(01:30):
sensing for over 15 years. I was at another sensing company a while ago, in the same 3D sensing space that we're talking about in this conversation, and that company, we enjoyed, let's say, a successful exit with Sony in 2015.

(01:50):
In the meantime, I've been going around. I was in the US for a while working at an automotive company, worked at arm's length with software companies and sensing companies, even an industrial smart glasses company out of Belgium, before founding, or co-founding, VoxelSensors in 2020.

(02:11):
And we're on a mission, let's say, to make context-aware applications in AR, in VR, in the mobile space, that heavily rely on all kinds of sensing paradigms in the next couple of years.
I like context-aware. That's like my middle name. I just want to be context-aware all the time.
Absolutely, we agree.

(02:34):
Wonderful.
My name is Boris Greenberg and I joined VoxelSensors something like half a year ago. My background is mostly architecture and designing systems for use cases in XR. Before that I was founder and CTO of a company

(02:59):
called iVision, where we were developing gaze-contingent projection for XR devices. And one of the very important things is actually making sure that the system operates in synergy. So here, I'm essentially responsible for the modules and the implementation of

(03:23):
the sensing: eye tracking, understanding of the user and the environment, in the context of an XR application.
Wow. Very, very deep background in this sensing area. When I think about sensors, of course: camera, microphone, and then IMU,

(03:44):
which is a combination of sensors. So tell me, what do you do in addition to all that? Or are you providing new sensors, or is it a combination of sensors, or is it the software middleware? Tell us about what you offer.
Well, actually, we're from silicon to software.

(04:06):
So the whole stack.
The whole stack, yes,
we're working on, or we developed, a sensor which is unique, different from any other sensor that you have seen so far. And those sensors have specific characteristics, and we'll talk about them more.

(04:26):
The characteristics in general, from a sensing perspective, are that they're very, very low in power consumption and very, very fast in their response times, highly accurate, and share the same characteristics as the other sensing modalities. And you said earlier you love context-aware things,
which is great. We love it as well,

(04:48):
but in order to make devicesthat are still friendly to
wear, let's say, and socially acceptable,even in an industrial context,
we need to be efficient. We don'thave an endless amount of power.
We don't have an endlessamount of compute. And.
The environments are complex too.

(05:08):
I think they can be dark, or there can be a lot of information. Yes.
Absolutely. And so, in our understanding, the device itself needs to know its surroundings. You cannot really rely on markers and features in the environment that are specifically set up,

(05:30):
especially if you run into an open world, and it could be a dangerous open world in an industrial context; your device needs to be self-aware there.
So you've been working in these fields for a long time. So what is it now that makes it possible for you to leap forward, to offer something that hasn't been available?

(05:54):
Is it the manufacturing of the chips? What are the elements? I'm sure it's not just one thing, but can you look around you and say what the triggers were?
Yes.
I think one of the key aspects of what is different now is

(06:14):
realization, good experience, that is helpful at the same time. Essentially, the first generation of devices was just the excitement of: okay, we can bring things together, give developers these tools, and let's see what they come up with. This experience created an understanding of what it really is,

(06:39):
augmenting our environment with additional information. Essentially it's two things: the ability to bring things to the user in real time, and at the same time, it should be relevant information. So this understanding, of combining these two requirements and creating useful tools, is essentially what gives us fertile ground

(07:04):
to develop a technology that is at the same time very, very low on latency, essentially giving things in real time, and at the same time with low power consumption, so it allows users to use it everywhere.
You're using some words,

(07:25):
but you're never saying artificial intelligence. And of course that's the big buzzword now. But of course, is there artificial intelligence somewhere in your solutions? Is that something that is part of the trend that you're

(07:46):
building on top of? Because if it's all local, if your intelligence and your processing is local, well, that's different than taking advantage of off-device capabilities, of course. So tell me, what's the role of artificial intelligence, if there's any? There doesn't have to be any.

(08:07):
Maybe a clarification.
It doesn't have to be all local; you can do things in the cloud, or off your smartphone, or anything near you, but you need to make sure that it doesn't break the experience, or break the immersion,

(08:28):
or introduce a lot of latency and lag. Especially, again, in industrial settings, maybe a bit more dangerous settings, you cannot tolerate that.
And so you can do a lot of precompute in the cloud and feed that to the device on the basis of the sensory input that we receive. Now,

(08:50):
there's a lot of AI involved, related to everything we do: to understand the context and to understand the user, the context of the user inside the context of the world. Those are two different things. Is my worker in a cognitive state to be able to execute a certain task? Those kinds of

(09:12):
activities you can start to look into, and help the user to better understand, better focus on the activities that they need to do. There's AI involved in that. Also, the generative AI era, which we're in now: we fit into that. We understand the context of the world. We actually,

(09:35):
you could in a way say that we segment the world and we label the world. We catalog all the objects in the world, the people in the world, feeding that into an AI model that then generates whatever needs to be generated given that contextual information. That is very, very powerful. Just an AI model with no context gives you a random

(09:59):
prediction of the world, and the more context you feed it, the more accurate your inference will be. And we are doing that part mostly.
Okay. Excellent. Excellent. I like the way you decomposed it. You're saying it's not all or nothing; it's contributing to the quality of the data that's acquired

(10:21):
about the context, and then also assisting in the experience that returns to the user, appropriate for that user in that environment. Like you said, the world context, not just that user's context.
But one very important thing: essentially, all these augmented reality or XR devices,

(10:45):
they are kind of empathic, in the sense that you want the user interface to be personalized to the habits of the user. Yes.
Very much so.
Essentially, that means that the system needs to monitor the user, and that is where the machine learning comes in. We are our own big data generators,

(11:07):
and it's very unique. You cannot pre-program the device to be universal for everybody, or one-size-fits-all. So the thing is to be adaptable to the habits of the specific user.
Preferences.
Yeah, preferences, exactly.

(11:28):
That's all where machine learning and AI come in quite significantly.
I think another area where it comes in is in error reduction, or artifact and error management. I know, from the years that I've been around this, that things like

(11:49):
magnets, big magnets I'm talking about, and cars and big bodies of water, for example, can interfere with the sensor readings and cause noise that is detrimental to the final result.

(12:09):
It can cause errors and artifacts.
So do you have a special way of dealing with those, or are you saying those interferences, or potential problems, are now filtered?
We do filter. Well, the instances you mention have interference, and they have these

(12:32):
artifacts that are affected by various environmental parameters. Usually noise; noise is a big parameter. I mean, just audio noise, right? Not just video noise. Yeah.
Noise as an artifact of a signal, essentially. So we use Kalman filters. We are more on the rigorous side of that, or the scientific approach

(12:56):
to this kind of problem. So we essentially model the behavior of the environment, or the system, and then we create a noise model that is fed into the Kalman filter. And yes, that is definitely one way to make the system better, to be robust to these kinds of noise effects. And for example,

(13:18):
one of the components is a MEMS mirror, which is definitely very, very susceptible to external noise, and that is where these kinds of filters are being employed, to diminish the effect of the noise in the system.
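The Kalman-filter approach Boris describes can be illustrated with a minimal sketch. This is not VoxelSensors' implementation; it is a generic one-dimensional filter in which the measurement-noise variance `r` plays the role of the noise model he mentions, and all numeric values are illustrative.

```python
import numpy as np

def kalman_1d(measurements, q=1e-3, r=0.25):
    """Minimal 1-D Kalman filter: smooth noisy sensor samples.

    q: process-noise variance (how much the true value may drift per step)
    r: measurement-noise variance (the noise model fed into the filter)
    """
    x = measurements[0]  # initial state estimate
    p = 1.0              # initial estimate variance
    estimates = []
    for z in measurements:
        # Predict: the state is assumed constant; uncertainty grows by q.
        p = p + q
        # Update: blend prediction and measurement via the Kalman gain.
        k = p / (p + r)
        x = x + k * (z - x)
        p = (1.0 - k) * p
        estimates.append(x)
    return np.array(estimates)

# A constant true value of 1.0 observed through noise of variance 0.25.
rng = np.random.default_rng(0)
noisy = 1.0 + rng.normal(0.0, 0.5, 300)
smooth = kalman_1d(noisy)
```

The filtered output has far lower variance than the raw samples, which is the "robust to noise effects" behavior described above.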
One of the ways that this is a classic computer vision problem

(13:41):
is that there are reflections, there are mirrors; just glass or any polished surface creates the illusion to the camera that there are two objects. The list of things that can cause these

(14:03):
errors in the data is very big, very extensive. And it requires a different approach for each type of error.
Yes, definitely. And this is where artificial intelligence can be very, very instrumental, to create heuristics. With our approach, that is very, very relevant.
Exactly.

(14:27):
And actually, having worked in computer vision and 3D sensing, and active systems mostly: if you want to have the device be self-aware of what's going on, you need to give it enough sensory inputs to make those aware, cautious decisions. And

(14:50):
those devices, even the one here that we all rave about in our industry, there's a ton of sensors in there, and a number of them are active sensors, producing their own signal in order to sense back the relevant information. We do the same, but we do it in a very simple way, very complex in implementation,

(15:11):
but the fundamentals are very simple, where we already reduce a lot of the noise and a lot of the artifacts through the sensing principle that we apply. And there's the saying, garbage in, garbage out. The more garbage you can filter out at the source of your information, all the

(15:34):
way at the beginning, the less passes on through your algorithms, and to your decision making, and to your AI algorithms that need to make assessments on the basis of whatever signal comes in. And so the pure, fundamental principles that we apply here allow us to filter out a lot of the noise, be it sunlight, be it

(15:54):
thermal stuff, and keep a reasonable signal quality all the way through the algorithms. Now, there are always moments where you encounter something you haven't seen before. Essentially, the noise, or the complexity of the

(16:15):
algorithms, comes from the fact that things might change quite tremendously between samples. So we try to mitigate that in a smart way, by increasing the sampling rate of the environment we are in. And this way the deltas, the changes in the environment at any given sample position, are

(16:38):
quite small. So that allows us to essentially simplify all the algorithms, by reducing the change between samples.
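The relationship Boris points to, between sampling rate and per-sample delta, is simple arithmetic; here is a hypothetical sketch (the 1 m/s speed is an illustrative value, not a figure from the episode):

```python
def per_sample_delta(speed_units_per_s: float, rate_hz: float) -> float:
    """Worst-case change in a linearly moving quantity between consecutive samples."""
    return speed_units_per_s / rate_hz

# A feature moving across the scene at 1 m/s:
frame_delta = per_sample_delta(1.0, 30)    # 30 fps camera: ~33 mm between frames
scan_delta = per_sample_delta(1.0, 100e6)  # 100 MHz scanning: 10 nm between samples
```

With deltas that small, the change an algorithm must handle per sample is nearly negligible, which is the simplification described above.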
Yes. So you're getting close to my last question, which is: when you go to talk with a hardware manufacturer,

(16:58):
a device, eyewear, or any wearable device, maybe it's a watch or something else. When you go to approach them, what are the biggest challenges that they have to overcome? For this whole, well, we haven't talked about spatial computing or multimodal because that's all

(17:19):
implied in how you get there. What are you finding to be where you have to help these hardware companies? Are they worried about, I mean, they're all worried about power consumption, they have to be, but what are the things that you have to help them do? Do you have to train them?

(17:39):
Do you have to do co-design with them to integrate your products?
The starting point is the fundamental principle of our technology, which is different from what the industry is used to, and let's say especially the industry that integrates sensors, that takes off-the-shelf

(18:02):
parts and adds them to a device. I'll try to explain it in a few seconds. We're not a frame-based system. Imagers, they take snapshots every 25 milliseconds or 33 milliseconds. The whole imager is being captured and read out,

(18:23):
whether it's a rolling shutter or a global shutter, it captures the whole thing. And then that information is passed on for algorithmic processing, or AI, and it's really those snapshots, one after the other. Our data is very different. We actually scan the world in a very, very, very fast way, like Boris explains, which has a lot of benefits.

(18:46):
We can go up to a hundred megahertz, which is a hundred million samples a second; that's unseen in the industry. Now, you get these samples at specific intervals. Every 10 nanoseconds, we can give you a new sample of the world, which looks much more like an audio recording signal.
Yeah, yeah.
A signal. Exactly. Yes, a signal. It's not an image. Yes.
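Johannes's contrast between frame capture and an audio-like sample stream can be sketched in code. The types, names, and rates here are illustrative assumptions, not VoxelSensors' actual interface:

```python
from dataclasses import dataclass
from typing import Iterator, List, Tuple

@dataclass
class Sample:
    t_ns: int     # timestamp: one sample every 10 ns at 100 MHz
    value: float  # a single measurement, not a full image

def frame_based(duration_ms: int, frame_interval_ms: int = 33) -> List[Tuple[str, int]]:
    """Frame-camera model: a complete snapshot every 25-33 ms."""
    return [("frame", t) for t in range(0, duration_ms, frame_interval_ms)]

def scanned_stream(n_samples: int, period_ns: int = 10) -> Iterator[Sample]:
    """Scanning-sensor model: a continuous, audio-like stream of single samples."""
    for i in range(n_samples):
        yield Sample(t_ns=i * period_ns, value=0.0)  # value would come from hardware

frames = frame_based(100)         # 4 whole-image snapshots in 100 ms
stream = list(scanned_stream(5))  # 5 individual samples spanning just 40 ns
```

The point of the contrast: algorithms written for dense periodic frames have to be rethought for a continuous per-sample stream, which is the mindset shift discussed next.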

(19:09):
Ah, very well explained. Yes, that is insightful. So your partners, your current or future partners, have to really change the way they think about the problem.
Exactly, exactly. Oh, that's exciting.
That ties into your earlier question, what is the difference, the breakthrough, that we're making here from a technology

(19:31):
perspective, which hasn't been done in the past. It's exactly that. We look at the problem in a very different way from what the industry is used to. And of course we have backwards-compatibility layers, so we can get it in there and have current algorithms run on this type of data. But the real powerful step to take is if you really fully integrate

(19:54):
that signal processing on this type of data; then you leapfrog forward.
That's a very, very excellent explanation, and an exciting future. I can't wait to see it. I hope some of the devices that you will be in will be

(20:14):
promoting: Hey, we have VoxelSensors.
Inside.
Inside. Exactly. Exactly. Exactly.
I look forward to seeing that. Gentlemen, thank you very much for spending the time with me here today, and good luck in your journey.
Thank you. Yeah,
see you at Mobile World Congress.

(20:34):
See you there.
See you there. Bye.