
June 29, 2025 · 28 mins

In this episode, AR for Enterprise Alliance Executive Director Mark Sage sits down with Edgar Rojas, Assistant Professor at Texas A&M University and a passionate advocate for extended reality (XR) as the interface of the future. Edgar shares how his work at the intersection of the College of Performance, Visualization and Fine Arts and the Laboratory for Ocean Innovation is redefining how augmented, virtual, and mixed reality can move from academic concepts into real-world industrial application.

Backed by industry partners like the American Bureau of Shipping, Rojas and his students are diving into user experience design, technical capability testing, and practical deployment scenarios that aim to take XR out of the lab and into the hands of everyday users. Whether it's helping a local business owner or equipping enterprise operations, the message is clear: immersive tech isn't just for tech companies, it's a tool for everyone.

Image from the Texas A&M website for the Laboratory for Ocean Innovation. Please visit the AREA.


Episode Transcript

Available transcripts are automatically generated. Complete accuracy is not guaranteed.
(00:09):
I am Karen Quatromoni, Director of Public Relations for Object Management Group, OMG. Welcome to our OMG Podcast series. At OMG, we're known for driving industry standards and building tech communities. Today's podcast will feature the Augmented Reality for Enterprise Alliance (AREA), which is the only global nonprofit member-based organization dedicated to the

(00:32):
widespread adoption of interoperable AR-enabled enterprise systems. We're here today with Edgar Rojas. He's an Assistant Professor at the College of Performance, Visualization and Fine Arts within Texas A&M University. Mark Sage, the executive director of the AREA, will lead the podcast.

(00:52):
Mark?
Thank you Karen, and welcome Edgar. Thank you very much for being with us on our podcast. Firstly, thank you not only for the work that you are doing, but for the work you're doing with your students. I've been lucky enough to meet some of them and listen to some of your projects, and it's very impressive, the subjects and the areas that you are covering.

(01:13):
So thank you both for your amazing work and for what you're doing with your students. So rather than me talking too much, why don't you perhaps introduce yourself and talk a little bit about the university?
Sure. Thank you very much. First of all, it's a great pleasure to be here. I appreciate the window for me to come and speak a little bit about it. Well, yeah. So as Karen mentioned,

(01:33):
my name is Edgar Rojas and I am an assistant professor here at Texas A&M within the College of Performance, Visualization and Fine Arts. Now, I think that throughout the presentation I'm also going to be referring to an institution, or a lab that we have, that is called the Laboratory for Ocean Innovation,

(01:54):
which is within Ocean Engineering here at Texas A&M. So right now I'm in a project that blends my knowledge from visualization in my college with this Laboratory for Ocean Innovation, which is currently sponsored entirely by ABS,

(02:15):
the American Bureau of Shipping. So throughout the presentation, I'm going to be blending a little bit of both ends as I talk about the different topics. But at my core, I am a researcher, and I'm a researcher specifically in

(02:36):
the areas of augmented reality, virtual reality, mixed reality, all the different realities that we have at this point. And the way in which I always explain my work is that I believe these technologies are meant to become the user interfaces of the future, but we are not there yet. And so what my work and my lab do is

(02:59):
look at what are the technical capabilities, user considerations, and overall applications that we need to develop in order to get there. So my long-term vision really is, I want to take these technologies out of the lab and put them into the hands of the people, the ones who actually need them. And 'people' is very broad here.

(03:22):
It can be the guy that owns the grocery store next door, but it can also be big institutions, I don't know, Texas A&M, if they need to incorporate these technologies to improve the way that they teach a particular subject. Or if you have people that, I don't know, have limited vision, for instance.

(03:43):
So how do these technologies need to be adapted to account for every possible user that we can have? So that's a little bit of who I am, and I have right now six great PhD students, as Mark was mentioning. So I appreciate that. I will definitely tell them that you sent them your regards.
Please do.

(04:06):
Yeah. So our lab is the Laboratory for Extended and Mixed User Realities, for which the acronym is LEMUR. So overall we look into these areas to make these technologies more accessible, more widespread for the end users.
Amazing. And I think we'll come back in a bit more detail and talk about some of the

(04:28):
work and your vision and mission, but maybe I can start with, you mentioned ABS, the American Bureau of Shipping, who've been a long-time AREA member, and also yourself at Texas A&M. Can you tell us a little bit about why the university joined the AREA and what you're hoping to achieve through the AREA membership?

(04:49):
Sure.
I will say that there's almost a short-term goal and a long-term goal. So in the short term, as you mentioned, ABS has been involved in the AREA for a while, and as we started this project with them, they were like, well, as an industry we're part of this big effort of driving these

(05:10):
technologies forward. So that's where, when we got introduced to the AREA, upon talking with you and doing a little bit of research on our own, the amount of depth and content that the AREA has been achieving throughout the years, it's impressive. So there is a lot of documentation and good starting points

(05:34):
for almost anybody that wants to start going along these routes. So within this world that evolves so fast, it's hard to keep track of everything that is happening and who is doing what and so on. So being able to have a hub in which you can get together with people that are interested in driving this field

(05:58):
forward has been amazing. So if I need to provide a more concrete example, for instance: as we started this project with ABS, we were interested in putting AR wearables on workers. For instance,

(06:20):
workers that are in a warehouse or workers that are in a vessel across the ocean. But we want to make sure that these devices are safe enough before we put them into action. So part of what we had to do, first of all, was, okay, what are the industry's current standards for this?

(06:40):
What are the recommendations of, yes, please do this, or no, don't do this? This headset works well in these conditions and this one does not work well in these conditions. So just having that starting point, by accessing all the documentation and all the work and workshops that the AREA has done, has been great, at least for us. It really provided us a very solid point to start, big picture. However,

(07:06):
it's not only about taking; we want to be able to give back part of that knowledge that we have been developing thanks to the AREA. So we want to really, as I was mentioning, we want to improve these technologies and make them useful for the end users.

(07:27):
So we want to make these lasting connections with the industry and with people that are actually deploying, or are interested in deploying, these technologies to the workers, into the field. There are just so many possible combinations. So now that we have been connecting with the AREA, we really want to establish those lasting industry connections and

(07:51):
overall, right, as well. The college that I am part of at A&M is fairly new. This is actually, well, officially as a college, our first year. But as an entity, we have been together for three years, which is still a short amount of time. And I can give you details of how it got created later if you want.

(08:12):
But we currently have, I would say, about five professors that are involved in different aspects of AR/VR technologies. So our goal is really to position Texas A&M at the forefront of AR research. And you can only do that if you are hanging out with other

(08:35):
experts in this field and people that actually know what's happening, that know what the real problems are, and that are willing to collaborate with you to be able to get there.
Yeah, perfect. Thank you. Normally I'd ask the next question about the university's mission and vision, but I think you've done a really good job of articulating that. So maybe we can just think a little bit, or you can give some examples of how

(08:59):
you're actually delivering that vision. Maybe a couple of the projects or maybe some of your research, just to put some meat on the bones, if you like, in terms of the actual great work that you're doing.
Sure, sure. I appreciate that, for sure. I would say that currently I am gearing... well, let's just start

(09:21):
a little bit back. So the overall idea of my college is to be able to blend technology with the end user, with an actual human. It's not like traditional computer science, for instance, in which you are doing a lot of testing to improve the speed of an

(09:45):
algorithm by four seconds. That's great on its own, but that's not what at least I do, or the college itself does. So it's really more this connection of, okay, how does an actual human, a person, and not only a human but an actual user that will wear these technologies, think about it? So is it easy to use? Is it understandable?

(10:08):
Is it getting in the way of that person performing their tasks? So with that in mind, ABS approached Ocean Engineering and us, because as I was mentioning at the very beginning, they are interested in pushing forward this next level of maritime research. They created a structure in which they have several

(10:31):
projects and we are one of those. I think we have nine projects at this point, looking into different elements of alternative fuel sources and so on. But within the one that I'm a part of, as I was mentioning before, we want to make sure that these devices are safe. And so for instance,

(10:54):
at the very beginning of the project, and we have been very fortunate that the project is really interdisciplinary, one of the first things that we had to do was to go to the Port of Houston, Port of Galveston, and we got onto one of the training ships that was there and we took 360-degree pictures of this

(11:17):
space, and then we were able to recreate it in our lab in College Station. So we now have this simulation, almost like a physical/digital twin of that specific room. And then we put sensors all over this space. And so through that we're asking people to complete different tasks,

(11:38):
and as they do these tasks, we're able to evaluate: are you bumping your head as you're going through a hatch door? Are you stepping on a pipe? Are you tripping as you try to get into these areas?
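To make that concrete, here is a minimal sketch, in Python, of how the contact events logged by such an instrumented space might be tallied per experimental condition (headset on versus off). The event names, conditions, and schema are illustrative assumptions, not the lab's actual pipeline.

```python
from collections import Counter
from dataclasses import dataclass

@dataclass
class ContactEvent:
    participant: str
    condition: str   # hypothetical labels, e.g. "headset" / "no_headset"
    kind: str        # hypothetical labels, e.g. "head_bump", "trip", "pipe_step"

def tally_by_condition(events: list[ContactEvent]) -> dict[str, Counter]:
    """Count each kind of safety event per experimental condition."""
    tallies: dict[str, Counter] = {}
    for event in events:
        tallies.setdefault(event.condition, Counter())[event.kind] += 1
    return tallies

# Example: a few simulated runs through the instrumented mock-up.
events = [
    ContactEvent("p01", "headset", "head_bump"),
    ContactEvent("p01", "headset", "trip"),
    ContactEvent("p02", "no_headset", "head_bump"),
]
print(tally_by_condition(events))
# {'headset': Counter({'head_bump': 1, 'trip': 1}), 'no_headset': Counter({'head_bump': 1})}
```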
So that setup is really allowing us to see, okay, is the fact that you are or are not wearing these devices affecting

(12:00):
your ability to perceive your environment, perceive your surroundings? Ultimately the idea was to address the question: are we ready to put these devices onto a worker that's on a vessel? And long story short, we still need a lot of work to be able to be there. And so what we're trying to do here at our lab

(12:23):
is, what are the components that we have already identified that we need, that could potentially translate into solutions that would assist workers as they're performing their tasks, to ensure their safety. For instance, one of my students right now is working on:

(12:44):
as you are walking around this area, in this simulated environment, can the headset detect where you are and let you know, hey, you're approaching a sharp edge; hey, you're approaching this hatch door? And because of all the documentation that we have, we know,

(13:06):
okay, it's very common for people to bump their head as they're walking through a hatch door, or it's very common for people to trip on housekeeping stuff that's lying around. We are developing smart solutions for the headset to identify these things and let the user know: hey, there's a highlight around the edge of the hatch door, be careful about that.
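As a rough illustration of the kind of check this implies, the sketch below compares the tracked headset position against hazard locations taken from a digital twin and returns the hazards close enough to warrant a highlight. The hazard map, coordinates, and trigger radii are hypothetical.

```python
import math

# Hypothetical hazard map pulled from the digital twin: position (x, y, z)
# in meters, plus the radius at which a warning highlight should appear.
HAZARDS = {
    "hatch_door_edge": ((2.0, 1.8, 0.5), 0.75),
    "floor_pipe":      ((4.5, 0.1, 1.0), 0.50),
}

def active_warnings(head_pos: tuple[float, float, float]) -> list[str]:
    """Return the hazards close enough to the tracked headset to highlight."""
    return [
        name
        for name, (pos, radius) in HAZARDS.items()
        if math.dist(head_pos, pos) <= radius
    ]

print(active_warnings((2.3, 1.7, 0.6)))   # ['hatch_door_edge']
```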

(13:32):
So that's within that project. But we're also very interested in other directions. For instance, another one of my students is looking into the area of remote coaching, or remotely refining someone's movements. So imagine that you are far away on an asset in the middle of the ocean. It would be very time consuming to send someone that has all the expertise,

(13:55):
like an actual mentor, an expert, all the way there when you already have people that know how to do the things, or that, so to speak, know how to do things but don't have all the necessary expertise. But a lot of those things are not just 'do this, do that' based on what I'm seeing. A lot of those require actually

(14:15):
feeling what you're doing or how much pressure you're using, but there is no way of remotely transmitting a subtle movement correction. So he is looking into an approach of combining AR visuals

(14:36):
and, a little bit, right now he's using musculoskeletal stimulation, those patches that you put on when you need a massage type of thing, to see if by combining those two inputs, like the tactile input and the visual, you can give the person the idea that someone is there,

(14:56):
subtly moving your arm around, for instance.
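One plausible way to combine the two channels he describes, offered purely as a sketch, is to map the error between the worker's tracked hand position and the remote coach's target into a visual direction cue plus a clamped stimulation intensity. The gain and clamp values are made-up placeholders, not measured parameters.

```python
import math

def correction_cues(current, target, gain=2.0, max_intensity=1.0):
    """Turn a positional error into a visual direction cue and a haptic level."""
    error = [t - c for c, t in zip(current, target)]
    magnitude = math.sqrt(sum(e * e for e in error))
    # Unit vector for the AR arrow; zero vector when already on target.
    direction = [e / magnitude for e in error] if magnitude > 1e-6 else [0.0] * 3
    # Stimulation grows with the error but is clamped for safety.
    intensity = min(max_intensity, gain * magnitude)
    return direction, intensity

direction, intensity = correction_cues(current=(0.10, 0.95, 0.30),
                                       target=(0.15, 1.00, 0.30))
print(direction, round(intensity, 2))
```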
So overall, as I was mentioning, we want to make these technologies more widespread and assist people in their day-to-day tasks and routines, whether that's in the industry or in their personal lives. Through these technologies,

(15:21):
we really believe that there's more to them than just entertainment, for sure.
Couldn't agree more. And I think there are some amazing insights into some of the challenges with XR. You focused a little bit on the enclosed space and having the right information, but also where your focus may be on something else, which is really great,

(15:44):
but also some of the kinds of solutions. And I guess that's, if it's okay, some of the insights that you've learned through doing some of this work. And I know it's work in progress, and my apologies, I'm asking for stuff that's still going through your process, but maybe one or two things that spring to mind where you think, wow,

(16:04):
we've been really insightful there, or that it's something which would really help the industry. I'd love to get your thoughts on some real nuggets.
Sure. Well, as you were mentioning, it really is work in progress, but

(16:25):
as we have been doing this work, the more that we do, the more we realize it's not only about XR, right? There are just so many things that we need to take into account right now. But one important thing that we're moving towards creating, and in fact I actually have a meeting later today to discuss it with another

(16:48):
professor that is interested in assisting us with this, is... I've been talking about this ability of ensuring safety, or knowing how aware you are of your environment. But so far that is very subjective. You can ask people: hey, how did you feel after going through this? Or how distracted were you by

(17:12):
the AR visuals, for instance. But there is no way of saying, okay, at this particular moment in time, the probability that you had of getting into an accident was, I don't know, 70% or 60%. And the reason why there is nothing like this is because there are so many factors

(17:35):
that can influence that: something like, okay, how is the environment? Housekeeping stuff, or sharp edges, or something like that. There are also factors like, how fast are you moving through it? Are you walking backwards as you're using these devices? And of course there are other things that we have no idea about.

(17:55):
So maybe you had a fight with your partner in the morning and now you're all distracted, right? There are definitely things that we cannot account for. What we are looking into is: what can we account for? And so what we're moving towards now is to develop these specific solutions to try to address what we can, while providing a quantifiable metric of,

(18:20):
okay, right now, as I was mentioning, if you were to take a snapshot of everything at this second, about two minutes and 30 seconds after you started working, what is the probability that you will get into an accident? So yeah, that number is one thing that we're currently trying to

(18:41):
compute. And as I was mentioning, it has to do with intrinsic behaviors: how fast are you moving? Are you walking backwards? How fast are you turning around? It has to do with the environment: are you close to a surface that is sharp, or is it behind you and you haven't noticed it yet? The ability,

(19:04):
and one of the great things about these devices, is that because we have forward-facing cameras, we can do vision processing, and even for things that you are not currently looking at, we might be aware that they are there. And especially if you consider that we might have a digital twin of the space in which we are working, we will know at any given point where you are in that space, and therefore you

(19:28):
can let people know: hey, this is happening, be careful, and so on.
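A toy version of the quantifiable metric being described might look like the following: a handful of behavior and environment factors squashed through a logistic function into a score between 0 and 1. The factors, weights, and bias are invented for illustration; a real model would presumably be fit against observed incidents rather than hand-tuned.

```python
import math

# Placeholder weights, not fitted values.
WEIGHTS = {"speed": 0.8, "walking_backwards": 1.5, "hazard_proximity": 2.0}
BIAS = -3.0

def risk_snapshot(speed_mps: float, walking_backwards: bool,
                  dist_to_nearest_hazard_m: float) -> float:
    """Combine behavior and environment factors into a 0..1 risk score."""
    proximity = 1.0 / (1.0 + dist_to_nearest_hazard_m)   # closer => higher
    z = (WEIGHTS["speed"] * speed_mps
         + WEIGHTS["walking_backwards"] * float(walking_backwards)
         + WEIGHTS["hazard_proximity"] * proximity
         + BIAS)
    return 1.0 / (1.0 + math.exp(-z))                    # logistic squash

# Moving quickly, walking backwards, 0.4 m from the nearest sharp edge.
print(round(risk_snapshot(1.4, True, 0.4), 2))
```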
Another big aspect of these devices, particularly when we're talking about optical see-through AR devices, is that you are putting these 3D graphics into your field of view as you are performing your tasks,

(19:49):
but that in itself can be distracting, can be dangerous. The place in which you position your graphics may occlude, I dunno, the sharp edge that you want people to avoid. So we also want to equip the devices with more understanding of the environment, of the relationship of the user,

(20:12):
the worker, with the environment at that particular point. And so for instance, if I know that the person is working with this machine, then first of all, don't put a panel right in front of my face, because I'm definitely not going to be able to do that. But can it be smart enough about the positioning of those

(20:32):
virtual panels? Can they be smart enough to know: okay, this person is working with this device, maybe I should move the panel aside so that you still get the information, but not in your way? And as you move around the environment, there are particular moments in which you might not require all the information, so there is not that big of a benefit to having the entire panel.

(20:54):
So maybe it's about putting it in screen space, almost as a monocular display would work, right? Or maybe just don't show the visuals at all, right? So what is this transition? When do we really need to get the information, and so on? That's also one thing that one of my students is looking at right now.
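The placement logic being described could be sketched as a small decision function: hide the panel when it adds nothing, keep it beside the machine during interaction, and fall back to a compact screen-space element while the user is moving. The states and mode names are assumptions for illustration, not an actual API.

```python
def panel_mode(interacting_with_machine: bool, moving: bool,
               info_needed: bool) -> str:
    """Pick where (or whether) to render an AR info panel."""
    if not info_needed:
        return "hidden"                 # no benefit, so no clutter
    if interacting_with_machine:
        return "world_anchored_offset"  # beside the machine, not over it
    if moving:
        return "screen_space_compact"   # small HUD cue, like a monocular display
    return "world_anchored_front"       # full panel is fine when idle

print(panel_mode(interacting_with_machine=True, moving=False, info_needed=True))
# world_anchored_offset
```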
So right now, the point at which we are is, we want to

(21:18):
develop these solutions that I was describing. So we are working on these three axes: one, the safety of the user interfaces; one, understanding the environment and notifying the person about hazards around them; and also one, interpreting the user's behavior,

(21:39):
how fast are you moving and so on. And so we are working towards developing, and we have some prototypes at this point that we can integrate. We're working right now with multiple devices. We want the solutions to be device agnostic. And in fact, what we really focus on is not describing

(22:04):
or coding a solution itself, but providing the guidelines for how you need to do this. So we have these different directions, but in addition to having the solutions, we want to be able to compute this notion of how likely you are to get into an accident. And so by having that number, then we can really evaluate:

(22:27):
are these solutions actually helping? And not only these solutions, but anything else that you might want to add, as an AR graphic, as a computing approach, anything. Then we will have a closed loop to evaluate: okay, this is what you're adding, this is how it's affecting things, so improve it or not,

(22:47):
but we are going to be able to close this loop, and in that, have a tangible way of knowing how big of an impact your AR solution is having on the actual worker. Right.
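Closing that loop could be as simple as comparing risk-score traces recorded with and without a given AR addition. A crude sketch, with invented numbers, follows.

```python
from statistics import mean

def solution_helps(baseline: list[float], with_solution: list[float],
                   min_improvement: float = 0.05) -> bool:
    """Did the AR addition lower the mean risk score by a meaningful margin?"""
    return mean(baseline) - mean(with_solution) >= min_improvement

baseline_runs = [0.62, 0.55, 0.70, 0.58]   # snapshots without the new overlay
overlay_runs = [0.48, 0.44, 0.57, 0.50]    # same tasks with hazard highlights on
print(solution_helps(baseline_runs, overlay_runs))   # True
```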
Amazing. And it is fascinating listening to it all. I'll try and summarize, which I'm not sure I'm going to do justice to,

(23:12):
but we've talked a lot within the AREA about how quite a lot of the user experience is still coming from a 2D world; we're replicating what we have on phones and on PCs. What I would say is you're taking that step to say: look, we need to design and have experiences which live in the spatial 3D environment.

(23:32):
They need to be aware of our surroundings, they need to help us do the best things, but also not do the bad things, in terms of being able to move around. And I think one of the best things I can say is that you're actually looking at and researching designing solutions for an augmented reality world, rather than taking designs that are 2D and trying

(23:56):
to stick them in the 3D world. So I hope that at least did a little bit of justice to the work that you're doing.
So maybe the final quick question, really about your insights. Aside from obviously joining the AREA, what advice would you give to companies starting out with Enterprise XR? You've been doing lots of interesting things.

(24:17):
What would be, say, your two bullet-point things that you would say to them if they're starting out? And again, that could be a provider of the technology or it could be an end user of the technology. You can choose which one.
That's a great question. If I have to pick two, I would say, and just by the AREA existing, because of the mission and vision

(24:41):
of the AREA, it covers what I'm about to say. But first of all, you don't need to reinvent the wheel. There is a big community that has been working on these topics that might already have a solution to a problem that you are facing right now. So you don't have to spend all your resources to recreate something that you

(25:05):
very well could have learned by speaking with someone: hey, I'm really interested in doing this, what recommendations can you give me? Or, can we create some sort of partnership? And so on, within this field. And again, this is why I believe that the AREA is impressive. Everybody wants to get into the AR wave,

(25:26):
but a lot of times they start thinking about it like, okay, how can we do it and make it our own? And I see the appeal of that, but in a world that is so interconnected, it's better to work as a team. And that leads to my second point: you don't have to work in a silo.

(25:49):
We have this big community of, again, like-minded people that are interested in really pushing the boundaries of AR technology. So share your findings, be part of this community. You will be surprised how many times people are, because again, these are people that are passionate about this field,

(26:10):
so they really want to share what they're doing. They really want to work together. The AREA has a lot of different committees that people can become a part of. If you are more interested in, say, the safety aspect of AR, there is a committee that meets regularly to discuss that. If you are more interested in, I don't know,

(26:31):
the human factors or the cybersecurity aspects of this, there is a committee for that. So really, be a part of a community, get involved. I think that there is a lot of benefit in working together to tackle this. And now, as a final thing that I just thought about:

(26:57):
thanks to the way that these technologies are evolving, you no longer need to be a core computer scientist or a core computer vision researcher to do meaningful things here. Because this field is so interdisciplinary, you can put your five cents into this whole topic with your expertise.

(27:20):
I think everybody will bring a very valid point to the table. And so don't feel that because your field is, I dunno, nursing or education, you are not going to be bringing valuable insights to the equation. I think that, again, with a field that is this interdisciplinary, we really need to hear all the perspectives.

(27:42):
So that's also part of why you should be getting involved with this, because we essentially need to hear your voice as well.
And I think that's a great way to end. In terms of augmented reality being the next computing paradigm, we don't have to say it; the leaders of the tech companies are saying it and spending a huge amount of

(28:02):
money. But it does mean there's an opportunity for everybody to get involved. So I think that's a really great way to finish up this fascinating talk. Edgar, thank you very much indeed. I look forward to your continued support and insights for the AREA, and keep up the great work, you and your students as well. Thank you.
Thank you very much. I appreciate the time today. Have a great day then.

(28:23):
Thank you.