Episode Transcript
Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Speaker 1 (00:04):
Welcome to TechStuff, a production of iHeartRadio's
How Stuff Works. Hey there, and welcome to TechStuff.
I'm your host, Jonathan Strickland. I'm an executive producer with
iHeartRadio and I love all things tech, and we're
going to enjoy another classic episode of TechStuff. And I promise, new ones are right
(00:26):
around the corner. I've got a cool one coming up
in just a little while that you definitely want to
check out. Very important, all about the dangers of browsing
the Internet in an insecure way. But before we get
to that, I thought maybe we could revisit this classic
episode in which I talk about augmented reality. So this
(00:48):
episode is titled Augmenting Your Reality, and it originally aired
just a couple of years ago, and I thought it'd
be fun to revisit it. So let's sit back and
enjoy this classic episode. So I thought I would do
a deeper dive, a bigger explanation about what augmented reality is,
what it's all about, how it works, and sort of
(01:09):
the applications we might put AR toward. What is it good for? Tons of stuff, as it turns out. So the first thing we should do is probably define some terms, because if you haven't really looked into augmented reality and you aren't familiar with AR,
(01:30):
you might just be lost. I'm gonna define it all
for you right now, because that's the kind of stand
up guy I am. Technically speaking, augmented reality is using digital information to enhance, or augment, an experience in our physical, real world. So the way we usually see this
(01:51):
implemented involves some sort of display that has an image
of the real world on it and overlays digital information on top of that image. So think of a camera's viewfinder, like an LCD screen on a camera, that actually labels the buildings that are in view.
When you're out on the street and you hold the
(02:11):
camera up or a smartphone or even a wearable device
like a head mounted display that you can look through
so you can see the real world, you're not just
staring at a screen, or if you are staring at
a screen, you're staring at a video feed that is
provided by an external camera mounted just on the other
side of the screen. So it's like you're looking through
(02:32):
a display in the first place, but then on top
of that view you have this digital information. That's the
most common implementation we talked about, but it's not the
only one. Augmented reality does not have to be visual only, or even involve visual information at all. You could have
audio only augmented reality, for example. But the whole idea
(02:54):
is that it's something that's created digitally to enhance your
experience in the real world. Now we can contrast this
with the concept of virtual reality. Virtual reality, of course,
is a term where you create an experience completely through computer-generated means. The computer is making all the things you
(03:16):
see and hear, and maybe even beyond that, if you have a really sophisticated setup, you might have some haptic feedback. Haptic refers to your sense of touch. So
if you have haptic feedback, that means you're getting information
feedback through your sense of touch. Common example of this
is a rumble pack inside a game controller, where, you
(03:38):
know, you fire a gun in a first-person shooter and your controller rumbles as a result, letting you know
that you are in fact, unleashing virtual destruction upon all
you survey. Well, the same thing can be true with a virtual reality setup. So virtual reality is all about
(03:58):
constructing an artificial reality, a simulated reality. Augmented reality is
all about enhancing the one that we are actually in.
And then there's also mixed reality. Mixed reality is kind
of sort of in between the two. You might have
some physical objects within a room that are also mapped
(04:21):
to a virtual environment, and then you use something like a head-mounted display to enter the virtual environment. That's the world it looks like you're inside, but you have physical objects in the room around you that are also mapped to the virtual world, meaning you could pick up a physical object and you would see that reflected within the virtual world, where you might pick up a sword and shield or
(04:43):
move a chair or something along those lines. So augmented reality,
virtual reality, and mixed reality are all kind of interrelated, so much so that their histories are also very much interrelated. And there are some people who try to collect these different technologies, these different approaches, and put them under
(05:05):
a common umbrella, and they tend to use the phrase alternate reality, which is unfortunate because that's also AR. But alternate reality is kind of the umbrella for virtual, augmented, and mixed reality. That kind of
gives you the definition of those basic terms, and it
is important to understand them because they're becoming more and
(05:26):
more important today. You are already probably aware of a
lot of VR headsets that are out there on the market, as well as VR, well, they're kind of like cases that you slide your smartphone into, so your smartphone becomes the actual display on a VR headset. The headset itself is more or less just a head
(05:50):
mounted case for your phone. We've seen a lot of those come out over the last few years. We've also seen a lot of AR applications come out, typically for things like iPads and smartphones, but we've also seen some hardware come out for wearable devices that falls into the augmented reality category, stuff like Google Glass, which I'll
(06:11):
talk about more a little bit later in this episode.
For augmented reality to work, to get this enhanced experience
of reality around you, there are a lot of technological
components that have to come together so that you actually
do get an experience that is meaningful. You know, you
(06:31):
have to have technology that quote-unquote knows where you are and what you are looking at, or what you are close to, in order to get that augmented experience. It wouldn't do me any good if I put on an augmented reality headset, for example, and stared at, let's say, a famous painting, and instead of getting information about
(06:53):
the famous painting, I saw an exploded view of a car engine. That would make no sense. So you have to build in technologies in order for the AR to understand what it is you're trying to do and to augment that experience, which meant that we had to
wait a pretty good long time for the various technologies
(07:16):
that we used to create this relationship to mature to a point where it was possible. Sometimes we had technologies that would allow us to do it, but it required tethering headsets to very large computers, which meant that you didn't really have any mobility, and it really limited the usefulness of the actual application. In other cases,
(07:42):
you could say that head tracking technology was absolutely necessary for AR to develop the way it did.
GPS technology as well. Remember it wasn't that long ago
that we ordinary mere mortals didn't have access to really
accurate GPS information. For a very long time, that was
(08:04):
purposefully made less accurate. It was a matter of national defense. It wasn't until Selective Availability was switched off in 2000 that you started to see GPS become truly accurate for the basic consumer. Why, back
(08:25):
in the day, you might get accuracy of only around a hundred meters, which is not great if you're looking for the next place to make your turn. If it's a hundred meters away, that's pretty far. But now it's within a few feet, so it's much better.
That sort of stuff all had to come together in
order for augmented reality to become viable, I almost said a reality, but that just starts to sound redundant
(08:46):
at any rate. Let's talk about some of these technologies. For AR we really need things like gyroscopes and accelerometers. These help devices understand their orientation, where they are with respect to something else. For a smartphone, it might be: is it in landscape mode or portrait mode?
But for a head mounted display, it would help give
(09:10):
the unit the information it needs to know which way you're looking, like are you looking to the east or to the west, that kind of thing. Also compasses, obviously very important, GPS sensors, and image recognition software, which has become really important so that when you are looking at something, the system can actually identify what that is.
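Just to make the accelerometer part concrete, here is a toy sketch of my own (not anything from the episode): deciding portrait versus landscape boils down to comparing how much of gravity falls along each of the device's screen axes.

```python
def orientation(ax: float, ay: float) -> str:
    """Classify device orientation from accelerometer gravity
    components along the screen's x and y axes (in m/s^2).

    When the phone is upright, gravity pulls mostly along the
    y axis; when it is on its side, mostly along the x axis.
    """
    if abs(ay) >= abs(ax):
        return "portrait"
    return "landscape"

# Phone held upright: gravity (~9.8 m/s^2) is along y.
print(orientation(0.3, 9.7))   # portrait
# Phone turned on its side: gravity is along x.
print(orientation(9.7, 0.3))   # landscape
```

A real device does the same comparison continuously, with filtering to ignore shakes and brief tilts.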
(09:32):
In some cases you can get around this. You can
design an AR system where, let's say, you make a movie poster and the AR application has the movie poster animate in some way if you hold up a smartphone that's running the appropriate app. So I'm just gonna take a movie from my past that does
(09:54):
not have an AR movie poster associated with it, but one that I can talk about as if it were a good example, and that has to be Big
Trouble in Little China, universally declared the best movie that
has ever been made. So you've got your Big Trouble in Little China poster up on the wall, and you hold up your smartphone and you activate your Big Trouble
(10:15):
in Little China movie marketing app, and the camera on your phone detects the poster, it knows the poster is there. Well, the app and the poster together are able
to construct the augmented experience because there have been elements
put into the poster that the app is looking for.
And once the app identifies that, like, it sees maybe
(10:37):
eight different points on the poster, and because of the orientation of those points, it knows what angle it's at and what height it's at in relation to the phone, and can give you the augmented reality experience on your display. In this case, it's obviously Jack Burton of the Pork Chop Express eating a sandwich, because, as we know, the
(10:59):
most riveting scene in the movie unfolds in this way.
So that would be kind of an augmented reality experience
where you didn't have to worry about every possible object out in the real world. You made it for something very specific, which means in your software you can have the camera look, quote-unquote, for these particular points of
(11:22):
reference and thus create the augmented experience in that way.
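To make that marker idea concrete, here is a toy sketch of my own (not any real app's code): compare where two known marker points are expected to sit on the poster with where the camera actually sees them, and you can recover the marker's in-plane rotation and scale.

```python
import math

def marker_pose(ref_pts, seen_pts):
    """Estimate in-plane rotation (degrees) and scale of a flat
    marker from two tracked reference points.

    ref_pts/seen_pts: [(x, y), (x, y)] -- the same two marker
    points, in marker coordinates and in the camera frame.
    """
    (rx0, ry0), (rx1, ry1) = ref_pts
    (sx0, sy0), (sx1, sy1) = seen_pts
    ref_vec = (rx1 - rx0, ry1 - ry0)
    seen_vec = (sx1 - sx0, sy1 - sy0)
    # Scale: how much longer or shorter the edge looks on camera.
    scale = math.hypot(*seen_vec) / math.hypot(*ref_vec)
    # Rotation: angle between the expected and observed edge.
    angle = math.degrees(
        math.atan2(seen_vec[1], seen_vec[0])
        - math.atan2(ref_vec[1], ref_vec[0])
    )
    return round(angle, 1), round(scale, 2)

# Poster's top edge is 100 units long in marker space; the camera
# sees it half that long and rotated 90 degrees.
print(marker_pose([(0, 0), (100, 0)], [(10, 10), (10, 60)]))  # (90.0, 0.5)
```

Real systems track more points (hence the "maybe eight" in the episode) and solve for full 3D tilt, but the principle of matching expected points to observed points is the same.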
If you want to take that and move it to
the real world where you can see augmented information about
just the world around you, it becomes way more complicated.
You have to have very sophisticated image recognition software so
that the camera picks up the images, the software processes
(11:44):
the information, identifies what those images are, and gives you
the relevant information. So working with all the sensors, augmented
reality can make this a possibility. So, another example: let's say you're out on the street in Atlanta. You're here in my hometown of Atlanta, Georgia, and you're looking at a building and you wonder what it is, and you hold up your phone and you've got your little
(12:05):
map app that allows you to look at a real-world setting and tells you information about it, and it
tells you it's the Georgia Aquarium. Well, first of all,
you would probably know that already because the signage there
is actually pretty good. But the point being that this would be something that would tap into the GPS coordinates on your phone, so it would know where your location
(12:26):
was and help narrow that down. The compass would tell it what direction you are facing, the camera angle. Also, we have some image recognition going on there. The accelerometer tells it the orientation of the phone itself. All of this data together would give the software the information needed for it to display the label Georgia Aquarium on your phone.
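A rough sketch of that kind of lookup (my own illustration, with approximate coordinates and a hypothetical landmark table, not any real app's logic): take the phone's GPS fix and compass heading, compute the bearing to each nearby landmark, and label whichever one falls inside the camera's field of view.

```python
import math

# Hypothetical landmark table: name -> (latitude, longitude).
LANDMARKS = {
    "Georgia Aquarium": (33.7634, -84.3951),
    "Centennial Park": (33.7603, -84.3935),
}

def bearing(lat1, lon1, lat2, lon2):
    """Approximate compass bearing (degrees) from point 1 to point 2."""
    dlat = lat2 - lat1
    # Scale the longitude difference by cos(latitude) so east-west
    # distances are roughly comparable to north-south ones.
    dlon = (lon2 - lon1) * math.cos(math.radians(lat1))
    return math.degrees(math.atan2(dlon, dlat)) % 360

def label_in_view(lat, lon, heading, fov=60):
    """Return the landmark whose bearing falls inside the camera's
    field of view, centered on the compass heading."""
    for name, (plat, plon) in LANDMARKS.items():
        diff = (bearing(lat, lon, plat, plon) - heading + 180) % 360 - 180
        if abs(diff) <= fov / 2:
            return name
    return None

# Standing just south of the aquarium, facing north (heading 0).
print(label_in_view(33.7615, -84.3950, 0))  # Georgia Aquarium
```

A production app would also rank landmarks by distance and fuse the accelerometer's tilt data, but the GPS-plus-compass narrowing shown here is the core of it.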
(12:50):
And it all happens in an instant. That's pretty amazing. Typically,
you also have to have some other method to communicate
with a larger infrastructure, because we don't have the capability
of building an enormously powerful computer that has all this real-world information programmed into it and making it a
(13:11):
handheld or wearable device. So usually you have to pair
these devices with some other larger infrastructure. Sometimes it's a
double handshake. For example, with Google Glass, you would use
Bluetooth to connect Google Glass to a smartphone. Then the
smartphone would have the connection to the larger internet through
your smartphone's cell service provider. So while you're experiencing
(13:39):
the augmented reality through the Google Glass, it's actually communicating
through your phone to the infrastructure to get the data
it needs to show you the information it's showing you. Very important elements. And all of these components, like I said,
came together more or less around the same time. Most
of them were being developed independently of each other, and
it's just that now we're seeing them all converge. That's
(14:01):
an old favorite word here at TechStuff. Converge together to create the augmented reality experience and make it possible.
So how did we get here? How did these different
elements develop? Well, there are a whole bunch of technology
pioneers who really created the foundation for augmented reality as
well as virtual reality and mixed reality. But one that
(14:22):
I think we really need to concentrate on first is Ivan Sutherland. Now, Sutherland was born in Hastings, Nebraska, in nineteen thirty eight, and as a kid, he was fascinated with mathematics, particularly geometry, and also with engineering. He
began to study and experiment with computers while he was
(14:42):
in school, and this was at a time when personal computers weren't a thing. There were no personal computers at this point. Computers were actually pretty rare, and they were huge, and in fact, they often would rely upon physical media formats like punch cards or paper tape to read a program in. So you didn't even have a disc,
(15:03):
certainly nothing like a USB thumb drive or anything like that. You actually had to put physical media into the machine for it to read and then execute whatever program you had designed for that device. He
went to college at what is now Carnegie Mellon University
on a full scholarship. He graduated with a Bachelor of
(15:24):
Science degree. He would then go on to earn a master's degree at Caltech and a PhD in electrical engineering from MIT, and actually his doctoral thesis supervisor was Claude Shannon. And we talked about Claude Shannon back in the two thousand and fourteen episode Who Is Claude Shannon. We recorded that not too long after
(15:45):
Shannon's passing, So if you want to hear a really
interesting story about a pioneer in computer science, you should
go check out that two thousand and fourteen episode. Back to Sutherland. For his thesis, he created something called Sketchpad, and that was really, by most accounts, the first computer graphical user interface, or GUI. A graphical user
(16:07):
interface means that you interact with the computer through graphics
representing various commands on the computer. Windows and the Mac
operating system are both examples of graphical user interfaces, as
is the interface on your smartphone. If you have a
smartphone where you choose applications on a screen, that's a
(16:28):
graphical user interface. Well, Sutherland created what is largely considered
to be the first one of those. After college, he
entered military service and he was assigned to the National
Security Agency. We have great friends there, I assume. I'm sure they're listening, because they're listening to everything. At any rate, he entered the NSA as an electrical engineer,
(16:50):
and in nineteen sixty four he replaced J. C. R. Licklider as the head of DARPA's Information Processing Techniques Office, or IPTO. Also, back then DARPA wasn't DARPA, it was just ARPA. So this is the same group, by the way, that would end up doing a lot of work that would form the ARPANET
(17:12):
a few years later, and the ARPANET was the predecessor to the Internet in some ways. At least, the ARPANET was what ended up being the building blocks for the infrastructure that would become the Internet. Now, all of that
work happened after Sutherland had already departed the organization. His
work became a fundamental component of both virtual and augmented reality.
(17:37):
As I mentioned earlier, in nineteen sixty five he wrote a piece, an essay. It's very short, it's a very easy read, and you can find it online. The title of the essay
is the Ultimate Display. And if you ever do any
research in virtual reality or augmented reality, this essay is
going to pop up in your research, so go ahead
and read it. It's like two pages long, so it
(17:58):
goes very quickly. In that essay he talked about several ideas,
including the idealized display, the ultimate display, something that would
be the furthest you could go with display technology. Now, keep in mind, in his time, he's
(18:18):
still alive, by the way, but at this time, in the nineteen sixties, you know, things were just restricted to monitors. You might have a light pen, but usually you would just use a keyboard. It was pretty bare bones. But he said, let's push this as far
as we can imagine it. And in his example, he
(18:40):
thought of a room that would be completely controlled by computers.
Everything you would experience within that room would be generated
by a computer. Everything you see, hear, smell, taste, and touch, all of it generated by computers. The computer would even be able to form physical objects out of raw matter itself. Now,
(19:01):
he wasn't suggesting that this would ever be a device
that we would actually be able to build. He was
just saying, what is the ultimate incarnation of display technology?
And if you read it, you realize, oh, this is where the Star Trek: The Next Generation writers got their idea for the Holodeck. But unlike Star Trek: The Next Generation,
(19:22):
the Ultimate Display would not go on the fritz every other episode and try to kill the crew. It was better than that. The Ultimate Display was sort of foundational. Philosophically, it was foundational for virtual reality and augmented reality. This idea of a very immersive experience
(19:43):
where you, as a user are surrounded somehow by this
computer generated experience. And that's true both with augmented reality
and with virtual reality. In augmented reality, the real world is still there, but you get this enhanced experience that is completely computer generated. So in nineteen sixty eight, Sutherland and
(20:09):
a student named Danny Cohen would create a VR/AR head-mounted display, or HMD, and they nicknamed it the Sword of Damocles. Why? Because you had to suspend it from the ceiling. It was too heavy to wear on your head. You needed it to be nice and sturdy. It included transparent lenses, which meant you
(20:33):
could overlay computer information on the lenses themselves, and thus
you could look through the lenses at the real world
and have these wire-frame graphics on top of what
you were looking at. And it also had a magnetic
tracking system, meaning that it had sensors that could detect
magnetic fields, and as you turned your head or you
(20:54):
changed the inclination of your head, it would change the magnetic field, and this would be relayed as a command to the visual display, the actual lenses themselves, so that the change would be reflected in what you saw.
So if you have a virtual environment and you turn
your head to the left, you want the view within
(21:14):
the virtual environment to go to the left too. But without head tracking technology, that's impossible. So this was a very early example of head tracking technology. And again, it used magnets, magnetic fields, in order to do that.
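The core of head tracking, at least for yaw, can be sketched like this (a toy example of mine, not Sutherland's actual system): measure how far the head has turned, then counter-rotate the world so the view shifts the opposite way.

```python
import math

def world_to_view(px, py, head_yaw_deg):
    """Rotate a world-space point into view space for a head
    turned head_yaw_deg degrees to the left (counterclockwise,
    seen from above). Turning your head left makes the scene
    appear to rotate right, so we rotate by the negative yaw.
    """
    a = math.radians(-head_yaw_deg)
    vx = px * math.cos(a) - py * math.sin(a)
    vy = px * math.sin(a) + py * math.cos(a)
    return round(vx, 3), round(vy, 3)

# A point straight ahead of the viewer (along +y). Turn the head
# 90 degrees left and it should end up to the viewer's right (+x).
print(world_to_view(0.0, 1.0, 90.0))  # (1.0, 0.0)
```

In the Sword of Damocles, the yaw and inclination numbers came from its magnetic sensors; modern headsets get them from gyroscopes and optical tracking, but the counter-rotation step is the same idea.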
Obviously it's also really important for augmented reality. Again, if the AR system doesn't detect that you are looking around,
(21:36):
then you're not getting relevant information, not for the specific thing you are looking at. Anyway, as I said, the graphics were pretty primitive. They were wire-frame drawings, but they still showed that this was a viable approach, using an HMD for augmented or virtual reality use. Oh
(21:57):
and one other note I should make. So, a lot of people say the Sword of Damocles was the first head-mounted display, and they say, you know, the first HMD was made in nineteen sixty eight. I take issue with that. I don't think of the Sword of Damocles as the first head-mounted display. That to me should go to a different invention called the Headsight, H
(22:20):
E A D S I G H T. Now that was developed at Philco, and unlike the Sword of Damocles, it didn't create a virtual world. Instead, the Headsight was sort of a remote viewfinder for a video camera.
So imagine that you've got a camera mounted on a mechanical,
(22:43):
swiveling mount, so you can move it left and right, you can change the orientation and the inclination as well, and then you have that mapped to a head-mounted display, so that if I put the display on and I look to the left, the camera pans to the left. If I look to the right, it pans to the right.
That sort of thing. It was meant to be a
(23:04):
way for people to operate a camera in a
remote location that might not be very friendly to a
human being standing there. For example, the exterior of an aircraft.
You could have a camera mounted on the outside of
your aircraft that would allow an engineer on the inside
to look around and maybe help a pilot land or
(23:27):
navigate in a dangerous situation, or just get an idea
of the status of the aircraft itself. This was very
much a technology that was being pushed by the military, an idea to create more military uses for this technology, to make the military more competent, more adept at very
(23:50):
rapidly changing situations on the technology front. So Headsight preceded the Sword of Damocles by about seven years. It came out around nineteen sixty one. But again it wasn't a
virtual reality headset or an augmented reality headset. It was
kind of, like I said, a remote viewfinder. But still, I consider that to be the earliest head-mounted display,
(24:12):
not the Sword of Damocles. However, Sutherland would end up
going on to make lots of other contributions in computer
graphics as well as the overall concepts that would guide
both virtual reality and augmented reality development over the next
several decades. But now it'll be time for me to
kind of move away from Sutherland to talk about some
(24:34):
other developments that were important in a R. And before
I get to that, let's take a quick break to
thank our sponsor. All Right, we just left off with
Ivan Sutherland. Now let's talk about a different father of
(24:57):
augmented reality, Myron Krueger, or Dr. Myron Krueger. In nineteen seventy four, Dr. Krueger created an augmented reality lab called Videoplace. He was really into this idea of
seeing the interaction of technology and people in artistic ways.
(25:18):
He really wanted to explore artistic expressions using technology and
people working together. So he wanted to create an artificial
reality environment that didn't require the user to wear special equipment.
You wouldn't have to put on a head-mounted display,
or wear special gloves, or use any kind of device
(25:39):
to control your actions, because that's a barrier between you
and the experience. Instead, his version consisted of a laboratory
that had several rooms all networked together, and each room
had a video camera in it and a projector and
a screen. Now, the video camera would pick up the
(26:01):
motions of the person inside the room, it would send
information to the projector, which would then project the person's
silhouette on the screen. And the silhouette was typically a
really bright color, and you could move around and your
silhouette would move around, so you almost became like a
puppet master controlling your own silhouette. But then he started
(26:22):
to incorporate other things, like other elements that were virtually
on the screen. The projector was projecting things that were
on the screen but not in the actual real room itself.
So imagine a ball and a ball is being projected
on the screen. Well, you could move around so that
your silhouette would interact with the ball and the ball
would bounce away. That sort of thing. So you would
(26:43):
be able to interact with virtual environments by moving around
in a real physical space. And while those objects weren't
really there in front of you, you could see the
representation of them on the screen. And this was really
powerful stuff. And remember I said these rooms were all
networked together, so you could actually have a system where
(27:04):
a person in one room and a person in another
room both have their silhouettes projected together in their respective
rooms on the screen, and your silhouette would be one color,
the other person's silhouette would be a different color, and
you could interact with one another. And according to reports
(27:25):
from this art experiment, they noticed that whenever people would
have their silhouettes cross one another, they would actually recoil
in their physical rooms. Keep in mind, they're in different rooms,
they're not in the same one together, they would recoil
as if they had made physical contact or bumped into someone.
So it showed that there was a very powerful psychological
(27:47):
element to this virtual presence. And again that psychological element
plays a hugely important role in VR and AR research and development, not just for creating products, but just to understand how we process information and incorporate it into our sense of reality. Not to get too deep for you guys.
(28:08):
So experimentation in the field continued over the years. In
the early nineteen eighties, Dr. Krueger would write and publish a book about artificial realities. But while the principles
for augmented reality were established, the technologies were still rather unwieldy.
They were large, they weren't reliable, and it would require
(28:28):
several years of work to improve those technologies, to create
miniaturization strategies to get the elements down to a size
that was more practical for that sort of use and
wouldn't require you to have a head-mounted display mounted
to the ceiling. And all of that took time, but
you could tell that the ideas underlying augmented and virtual
(28:50):
reality were already in place. There was a Boeing researcher
named Tom Caudell who coined the term augmented reality, and
he was specifically using it to talk about this approach
to overlaying digital information on top of our physical world
to enhance it in some way. Now, Dr. Caudell earned
(29:12):
a PhD in physics and astronomy from the University of Arizona,
and before contributing the term augmented reality to the public lexicon,
he did extensive work in artificial intelligence research and development.
He also became a professor in the fields of electrical
and computer engineering at the University of New Mexico. So
when he was working with Boeing, he used this phrase
(29:34):
to talk about a specific system he was working on, an augmented reality system, and the whole purpose of this was to help people who were constructing airplanes lay cables properly. The whole idea was to use this system so that an electrician could see exactly where the cable needed to
(29:55):
go inside the partly constructed cabin of an aircraft, and that way you could follow the directions that you see through your display, lay the actual cable down where the guide tells you to go, and then you would have a properly wired airplane. And I'm sure, as
(30:17):
we're all aware, properly wired airplanes are good airplanes. Improperly
wired airplanes are not so good. So it was a
very important system to make this much smoother and faster, and it meant that you didn't have to have as many experts to guide the process. You could
(30:38):
actually have someone come in who had never done this before and just follow the directions through this augmented reality system, and they could wire the airplane properly. So
really clever means of using augmented reality. Also, we would
end up seeing that same sort of philosophy used again
(31:00):
and again in the future in more sophisticated types of technology, but it was the exact same approach, the exact same idea underlying it. In the early nineteen nineties, Louis Rosenberg proposed a system
that the Air Force could use to allow someone to
control devices from a remote location, and that consisted of
(31:20):
a video camera which would provide the visual data to
the user through a head mounted display. They would wear
the display on their heads or they would look at
a screen, but typically they'd wear a display, and then
they would also wear an exoskeleton on their upper body
that would allow them to control some sort of robotic device,
(31:42):
typically robotic arms. And usually the way this would work
is that the display was designed in such a way, with the video camera, that the view the person had made it look like the robot arms were their actual arms, which required a little bit of trickery on the part of Rosenberg. They had to fudge
(32:03):
the distances between the video camera and the robotic arms to give this sort of feeling that the robot arms represented your actual arms. So you move your arms inside the exoskeleton and the robot arms would move as well at their remote location. So it's kind of like a really fancy remote control. Now imagine that the
(32:25):
robot arms are holding various tools. The suit would also provide haptic feedback, that touch-based feedback, to let a user know more about what is going on when they're operating the arms. So if you were to do something that would make a robot arm encounter resistance, then you would feel haptic feedback in the suit that
(32:47):
would indicate, oh, you're going beyond the parameters of where this robot arm is capable of going. So you learned very quickly where you can operate within that suit and make sure that you are not pushing it beyond its limits. You could also end up using these tools to do various things in this remote environment. Now,
(33:09):
Rosenberg called his system virtual fixtures, which meant that the
user would see these virtual overlays on top of a
real environment that they were looking at. So I'm going to give a very basic example to illustrate this, because it's hard to imagine, it's hard to get it across in words. But let's say you're looking through a head-mounted display and in front of you is a board, a wooden board, and it's just a regular wooden board. There's
wooden board, and it's just a regular wooden board. There's
nothing painted on it or anything in the real world,
and it's in a room that's across the building from you.
You cannot see this with your own eyes. You can
only see it through the video camera. The virtual fixture
overlay might be a series of circles, and the circles
(33:53):
are things that you were meant to cut out of
the board using the robot arms and a tool that's right there inside the physical environment, across the building from you. So you follow the patterns that you see in this virtual overlay and you complete the task. That's a very simple example, and this system was meant to allow for that. That's what he would call the
(34:15):
virtual fixtures, these overlays that you would see that would appear to be real but actually were not present in the physical environment itself. Now, also, a group of researchers at Columbia University was proposing a system that they
called the knowledge based Augmented Reality for Maintenance Assistance a
(34:38):
k a. Karma Cute. Their approach was pretty novel. They
pointed out that while augmented reality had tremendous potential, it
also had a really big barrier, in that it takes an enormous amount of time to design or animate and implement these graphic overlays for AR applications. So say
(35:00):
you're in a room and you're looking at different objects,
and little labels are popping up for each object. If
you're having to do all that by hand, it takes
a huge amount of time. What they wanted to do
was create artificial intelligence systems, or at least techniques to
generate graphics automatically on the fly. So this would be
(35:22):
similar to using image recognition software, so that if you
look at a specific box, let's say, the image recognition
software might be able to map that box to a
specific product and thus give you an overlay of information
about the product that would be inside that box, and
it would be able to do all this automatically. It
would not require a human programmer to go through and
(35:45):
and look at every single product in every single type
of box and program all that out. That would be ridiculous,
It would take forever. So it was the work of
this group with Karma that really started the ball rolling
with this AI approach to automatically fill in that information
and make AR a more practical experience. Around the
(36:08):
same time, between ninety-two and ninety-three, the Loral Western Development Labs, which was a defense contractor, began to work with the U.S. military to create AR systems
for military vehicles. And you can understand very quickly how
a R would have enormous potential for military applications. And
in fact, a R is very commonly used in lots
(36:30):
of different things like pilot helmets, where it helps pilots keep track of targets and identify potential threats, that kind of thing. But in this case they were really looking at creating an augmented reality system that would
create virtual opponents for people working in simulated wartime conditions.
(36:53):
So really a training program. Imagine that you're operating an
actual military vehicle like a tank, and you have a
view outside that is really an augmented reality system, so
you're actually looking at the real world around you. You
aren't just sitting in a simulator inside of building. You
are out there in the field controlling a real vehicle
(37:14):
moving around in real terrain. But you also see virtual
representations of enemies in that real terrain, and you can
practice maneuvers and firing on enemies that sort of thing,
probably not using live ammunition at that point, but having
a more realistic simulation in a real environment, so that
you're not just trying to create a totally virtual scenario. Anyway,
(37:40):
that work was done quietly; the world at large wouldn't really learn about it until years later, because that's the way the military works. They're not so eager to talk about their stuff while they're still doing it. Meanwhile, at
the same time, artists were continuing to explore the relationships
between physical performers and virtual environments. And so you remember
(38:00):
I talked about Dr. Krueger earlier. Well, in ninety-four, a different artist, Julie Martin, would create a piece called Dancing in Cyberspace, and in that piece, dancers in a physical space, on a physical stage, were able to manipulate virtual objects, so
an audience would be able to see both the physical
performance by the dancers and the virtual reactions the things
(38:25):
that happened within the virtual environment as a result of
the dancers moving around their physical space. Pretty neat. In ninety-five, two researchers, Rekimoto and Nagao, created the first real handheld AR display, but it was a tethered display. It wasn't free-form. You couldn't just take it anywhere. It
(38:45):
was called NaviCam, and you had to have a tether, a cable essentially, connecting the NaviCam to a workstation. But it had a forward-facing camera, and the video feed would go from this handheld device through the cable to the workstation, and it could
detect color coded markers in the camera image and display
(39:08):
information on a video see through view. So you can
get that augmented reality experience. Obviously very limited, you know,
you could not just carry this around with you everywhere
you go, but it showed the ideas behind augmented reality
could in fact be realized in a handheld format. Now
it's just a matter of getting those different components small
(39:29):
enough to all fit in a self contained mobile form factor. Now,
in the late nineties, we started seeing televised sporting events
featuring augmented reality elements, or at least you did. I
don't watch sports ball, that's not entirely true, but I
don't watch football or hockey, American football or hockey, and
(39:49):
both of those were the sports that really got them. First off, I'm gonna backtrack. I used to watch hockey, but then Winnipeg stole the Atlanta Thrashers from me. Winnipeg! Okay, getting back to hockey. So hockey had the FoxTrax system, which Fox put into hockey games so that you could
(40:11):
easily follow the puck. Instead of trying to watch this
little bitty black disc spinning around, You've got to watch
this very bright, highlighted neon colored disc that everyone hated.
And after about two seasons, Fox stopped doing it, and people were happy. And the Thrashers moved away, and then it
(40:31):
was just miserable. American football would follow suit in the late nineties and have the first down line introduced, where they could overlay the first down line on live video.
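The trick behind that line is essentially chroma keying: the graphic is painted only over pixels whose color matches the field, so players and referees naturally occlude it. Here is a minimal illustrative sketch of the keying step in Python; the reference colors and tolerance are assumed values, not the broadcast system's actual ones.

```python
# Illustrative sketch of the chroma-key idea behind the broadcast
# first down line: paint the overlay only on field-colored pixels,
# so anything standing on the line (players, refs) occludes it.
# The reference colors and tolerance below are assumed values.

GRASS = (50, 140, 60)    # assumed reference field color (R, G, B)
YELLOW = (255, 230, 0)   # overlay color

def is_grass(pixel, tol=40):
    """True if every channel is within `tol` of the field color."""
    return all(abs(p - g) <= tol for p, g in zip(pixel, GRASS))

def draw_first_down_line(frame, row):
    """Paint the line across `row` of the frame, skipping pixels
    that don't match the field, so players occlude the line."""
    for col, pixel in enumerate(frame[row]):
        if is_grass(pixel):
            frame[row][col] = YELLOW
    return frame
```

The real system also has to track where the line sits on the field as the camera pans and zooms, which is the harder part, but the keying step is what keeps the line "under" the players.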
Usually it's a bright yellow line that indicates how far
the offensive team needs to go. And by offensive, I
mean they're on the offensive. I don't mean they offend
(40:53):
my sensibilities. I'm not that against American football, but it showed how far they would need to go in order to establish a first down, which I am told is something you want to do. That would start to get employed in ninety-eight, and over time we would see that increase, where eventually Skycam was even able to use this system.
(41:15):
At first it wasn't. You could get a Skycam view, but you couldn't do the overlay of the first-and-ten line until later. Well, I've got a lot more to
say about augmented reality, but before I do, let's take
another quick break to thank our sponsor. Okay, we're back.
(41:40):
Let's skip ahead to ninety-nine. I guess it's not really skipping, given what I just talked about. Let's plod ahead to ninety-nine. That's when NASA's X-38 spacecraft was using an AR system as part of its navigational tools, so people back on Earth could look at a view from the spacecraft, a camera
(42:04):
mounted on the spacecraft, and on top of that view
they could overlay map data to help with navigation. And
all of that, of course was controlled back here on Earth.
But it was sort of an experiment to see how
augmented reality could be incorporated into space exploration missions in the future and make them more effective. Also in ninety-nine, the
(42:27):
Navy began work on the Battlefield Augmented Reality System, or BARS, which is a wearable AR system for soldiers. You've probably seen various implementations of this over the years. It's obviously evolved since ninety-nine. It's one of those pieces of
technology that some soldiers took to, but a lot just
(42:48):
felt that it created unnecessary distractions. Technology and warfare is a very, very difficult mix, because there are sometimes where we think, oh, more information is always better, but in some cases that doesn't seem to hold true. And for some people with these head-mounted displays, or really heads-up
(43:10):
displays, HUDs, that can sometimes be the case; depends on the implementation. In two thousand, Hirokazu Kato
created a software library called ARToolKit. A very important software library, it was also open source, so anyone could contribute to it, modify it, put out a new version,
(43:32):
that sort of stuff. And it uses video tracking to overlay computer graphics on a video camera feed, and it's still a component for a lot of AR experiences today.
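The marker tracking described here can be reduced to a toy: binarize the camera frame, find the dark fiducial, and return the point where graphics should be anchored. This Python sketch is a simplified stand-in, not ARToolKit's actual API; the real library goes much further and estimates full 3D camera pose from the marker's corner positions.

```python
# Toy stand-in for marker tracking: locate the dark fiducial in a
# binarized (grayscale) frame and return the point where an overlay
# should be anchored. Real marker trackers such as ARToolKit go on
# to estimate full 3D camera pose from the marker's four corners.

def find_marker_anchor(frame, threshold=128):
    """Return the (row, col) center of the dark region in a 2D
    grayscale frame, or None if no dark pixels are found."""
    dark = [(r, c) for r, row in enumerate(frame)
                   for c, v in enumerate(row) if v < threshold]
    if not dark:
        return None
    rows = [r for r, _ in dark]
    cols = [c for _, c in dark]
    return ((min(rows) + max(rows)) // 2, (min(cols) + max(cols)) // 2)
```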
Later on in the two thousands, this would be adapted so that it could also be used in Web experiences, not just experiences native to specific devices, and we continue
(43:56):
to see AR built into new experiences, including smartphones and tablets. By two thousand four, some researchers in Germany were creating AR apps that could take advantage of a smartphone's camera.
But two thousand four's pretty early for smartphones. It really would be a few years before this would truly
take off, because that's when Apple came out with the
(44:18):
iPhone in two thousand seven. That was the real revolution
in smartphone technology. There have been smartphones before the iPhone,
don't get me wrong, and many of them were really good,
but the iPhone was something that caught the public's attention
and made smartphones sexy. And because of that, there was
a ton of money poured into the smartphone industry as
(44:41):
well, not just into Apple, but also into other companies, like the companies that were offering Android smartphones. But I think we can really thank Apple for all of that happening in the first place, especially things like that accelerometer where you could switch from portrait to landscape mode.
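That portrait-to-landscape flip comes down to reading which device axis the accelerometer says gravity is pulling along. Here's a hedged sketch; the axis directions and sign conventions are assumptions picked for illustration, since real platforms each define their own.

```python
# Sketch of orientation detection from an accelerometer: whichever
# device axis gravity dominates tells you which edge points down.
# Axis directions and sign conventions here are assumptions for
# illustration; real platforms define their own.

def orientation(ax, ay):
    """Classify device orientation from gravity components (m/s^2)
    along the device's x (short edge) and y (long edge) axes."""
    if abs(ay) >= abs(ax):
        return "portrait" if ay <= 0 else "portrait-upside-down"
    return "landscape-left" if ax > 0 else "landscape-right"
```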
I remember everyone freaking out about that when Steve Jobs
(45:02):
showed it off in two thousand seven at Macworld, and everyone thought, wow, this is amazing. Well,
we take it for granted now, but it was a
big deal then. So once that smartphone revolution happened, it
was a landslide victory for both augmented reality and virtual
reality research and development because it meant that so much
(45:23):
money was being poured into creating newer, thinner, more capable
smartphones that we saw an explosion in technological development that
could also be used for virtual and augmented reality experiences. So,
for example, I think of those sensors I talked about earlier,
(45:43):
accelerometers and gyroscopes, that sort of thing. Well, we saw
a lot of development in those spaces in order to
make smartphones better, and people who are working in AR and VR experiences can take advantage of those same sensors, either creating apps specifically for smartphones, thus you don't have to build any other hardware, you just use existing hardware,
(46:05):
but that limits how you can use it, right because
you don't typically wear your smartphone directly in front of
your face. Or they could end up taking advantage of
those new, smaller sensors and incorporate them directly into brand
new hardware, like various types of wearables like Google Glass,
for example, but that would be a few more years.
(46:25):
In two thousand eleven, Nintendo launched the Nintendo three D S,
which included a camera. It was, you know, the three D capable handheld device, and it actually included a pair of forward-facing cameras, so you could take three D photos
if you wanted to, and it also had some a
(46:47):
R software included with it. You would get these special
Nintendo cards kind of like playing cards, and if you
were to point the camera of the three D S
at the card and look at the screen, you would
see a little virtual three dimensional character pop up on
the card. So Mario would be an obvious example. You
(47:09):
put the Mario card down on the table, you hold
up the three D S, and you aim the camera
at the card, and you look at the screen and
there's Mario, and Mario appears to be jumping around on
your physical table. Now, obviously, if you look off of
the display, there's no Mario jumping around, but on the
display there he is, and it was pretty cute. I
(47:30):
remember being really impressed with this very simple implementation of
a R when we got our three DS, and then
I took our three DS apart and then I took
pictures of it, and then I posted them on Twitter, and people got sad. It was a great day. In two thousand thirteen,
Google introduced Google Glass. That was the wearable that included
(47:53):
a small display positioned just above the right eye. So when you looked straight forward, you could tell that there was something kind of above your natural eyeline, but it didn't get in the way too much. To look at the screen, you actually had to glance, you know, glance upward, and then you could see what was on the display. Google Glass had augmented reality features like crazy.
(48:19):
You could take video calls. You could actually use the glasses not just to take a video call, but to show the other person what you were looking at so they could see from your point of view. You could
also overlay directions, so if you're walking down the street, you could glance up at the screen and it would tell
(48:39):
you if you need to keep going straight or turn left, turn right, that kind of thing. It was really useful.
I had a pair of these Google Glass and
I really liked the direction they were going in. I
felt that it wasn't a fully realized product at the time,
and eventually Google agreed, and after a couple of years, they took Google Glass off the market in early
(49:00):
twenty fifteen, and now you can't get them anymore. They were clever,
but they were expensive, and they had some limitations. And
like I was saying earlier, you know, it's hard to
build all the components you need into one headset. So
Google Glass would communicate via Bluetooth to your smartphone, and
your smartphone would act as the actual nexus point to
(49:21):
the Internet. But it was a neat idea, and
I enjoyed getting to use them while I did, so
I keep hoping to see a return of that kind
of technology, but perhaps in a more mature and less
expensive format. Now we've also seen applications similar to the
(49:42):
ones we mentioned earlier, the ones that are meant to
guide people into laying out or repairing a system. We've
seen that in the car world. Not too long ago,
there was the MARTA system introduced by Volkswagen. MARTA makes
me chuckle because that's also the name of Atlanta's public
transportation system, but in this case, it stands for Mobile
(50:03):
Augmented Reality Technical Assistance and it's specifically designed for mechanics
who are working on the XL one vehicle. So if you hold up an iPad that has this app on it, and the camera is pointed at an XL one, and
you look at the display, you'll see information overlaid on
top of the car, including labels for all the different parts.
(50:25):
So let's say you're a mechanic and you have to
do a specific repair on this vehicle. You hold up
the iPad, you look through the display, and you see
exactly what you need to do. It gives you a
set of instructions that shows you what you need to do.
It tells you where you need to stand based upon
the angle of the view. So if you hold it
up and it says no, you need to move about
(50:46):
a foot to the right, you can do that. Then you hold up the iPad again, and it'll say, all right, you're in the right spot. Make sure you loosen this particular bolt first. That kind of thing, and it's meant to be an interactive maintenance guide; in a way, a maintenance and repair guide. This is one of those applications of augmented reality that I think is a no-brainer. To me, it's a
(51:09):
killer app. The idea of having an ability to work
with something you are not familiar with, but you're able
to leverage the expertise of people who either designed it
or built it, or just fully understand it, and get
guidance based on their expertise in real time, so you're
(51:31):
not having to go and consult an article about
it or watch a YouTube video. You get step by
step instructions overlaid on top of your view of that thing.
To me, that's the most compelling use of augmented reality
from a practical standpoint. There are a lot of other
(51:51):
uses that I'll talk about towards the end that I
think are also really super cool. So don't get me wrong,
It's not the only one. But let's move on to two thousand fifteen. That was when Microsoft would unveil the HoloLens, something I still want to try out. I have not had a chance to try a HoloLens yet. That is a headset capable of advanced AR applications, everything
(52:12):
from what I was just talking about, giving you guidance,
step by step instructions on how to do like a
repair job on say an electrical outlet. You can even
use a Skype system to call an expert who can
then view your point of view and interact with that
point of view. So Let's say I'm looking at the outlet.
(52:35):
The expert electrician i'm talking to can see what I see,
and he or she can also make notes on the
display which shows up in my field of view. So
he or she might circle a specific wire and say
you need to remove that one first,
and I know I need to do that one first
(52:55):
because I can see which one they are talking about.
Or they might circle another wire and say, no matter what you do, don't cut this wire or the toilet upstairs will explode like Lethal Weapon 2, and I won't do that because, you know, that guy's like three days from retirement, and I have a heart. But no, this
(53:18):
is a really neat idea, having this interactive ability to overlay information from the digital world onto your physical world. And beyond that, the HoloLens has
lots of other functions. It's not just something to do,
you know, home repairs around the house. You can also
use it for entertainment purposes, like you could create a
(53:40):
screen that can show you video from various sources and
you can assign it a place on a wall in
your environment. Let's say that you're in your living room
and you just create a screen so you can watch Netflix,
and you slap it on a wall and it will
stay in that same position relative to your point of view.
(54:03):
So if you look to the left or right, the
screen stays where you put it as if it were
physically there on your wall. But keep in mind it's
just a virtual screen, and when you look back to
that part of your wall, you'll see the virtual screen
there playing whatever it was that you wanted to watch.
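The reason the screen stays put is that it's stored in world coordinates, not screen coordinates: every frame, the headset re-projects the anchored object using the current head pose. Here's a minimal sketch of that idea, flattened to a single yaw angle in degrees; a real headset tracks full six-degree-of-freedom pose.

```python
# Minimal sketch of a world-anchored virtual screen: the screen's
# azimuth is fixed in *world* space, and each frame it is converted
# into *view* space from the current head yaw, so it stays pinned to
# the wall as you look around. Degrees, yaw only; a real headset
# tracks full 6-DOF position and orientation.

def view_azimuth(screen_world_az, head_yaw):
    """Angle of the screen relative to the viewer's gaze, wrapped
    into the range [-180, 180) degrees."""
    return (screen_world_az - head_yaw + 180) % 360 - 180

def is_visible(screen_world_az, head_yaw, fov=90):
    """True when the anchored screen falls inside the field of view."""
    return abs(view_azimuth(screen_world_az, head_yaw)) <= fov / 2
```

If the screen were stored in view space instead, it would drag along with your gaze as you turned your head, which is exactly what you don't want from a screen pinned to the wall.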
I think that's a super cool idea. And they've also
shown off games like a game of Minecraft that uses
(54:26):
HoloLens so you can actually view a Minecraft world sitting, appearing to sit at any rate, on top of
a table, so you can walk around the table and
view the Minecraft world from various angles and play that way.
I think that's super neat. Don't know how compelling it is,
because again I haven't tried it myself, but I really
(54:47):
like the idea. This year, two thousand sixteen, AR got another big boost from a little game called Pokemon Go, although I have to admit this was a really primitive, basic implementation of augmented reality. Really, it was not much more than, in fact, it was nothing more than an animated overlay that would exist on
(55:09):
top of the camera view of your device.
So let's say I'm holding up my smartphone and I'm trying to catch a Jigglypuff, and the Jigglypuff
is currently bouncing up and down on the sidewalk in
front of me. That's about as far as the augmented
reality actual experience would go. So very primitive. But because
(55:29):
Pokemon Go became so popular so quickly, it really pushed
the concept of a R to the front of the
minds of people everywhere, including business owners who immediately said,
we need an augmented reality app. Whether they actually needed
one or not is beside the point. A lot of
people got into AR because of Pokemon Go,
(55:53):
for both good and bad. I always think that you
have to come up with the experience first. You have
to understand why you need to use a specific strategy
to create a specific experience and then build it. Not Hey,
we need augmented reality, make something that's AR. To me,
(56:13):
that's the backwards way of going about it. But what do I know? I'm not a programmer, so I'm sure the programmers feel a similar way to me. But that's just a guess. Now, the future of AR depends heavily upon the applications we see, and which ones end up being successful and which ones aren't. Right now,
I would say that the best bet is to see
(56:36):
more AR features built into smartphones and tablets. Or maybe not necessarily built into them, but have apps available that create AR experiences for very specific contexts. Like
let's say it's a museum app. You might download a
museum app on your phone, and when you go to
the museum and you use your phone, you can get
(56:57):
more information about the paintings and sculptures and/or installations
that you see in the museum. That's an easy one
to understand. But that same app isn't going to be useful once you leave the museum; you no longer have the context that it is tied to. I think that
smartphones are probably going to be where the greatest development
is going to be in the near term, because wearables
(57:19):
are still really hard to do. We still don't have a consumer version of the HoloLens available for anyone to purchase, and it may never come out as
a consumer product. Microsoft hasn't shown a whole lot of
interest in making it a consumer product. Maybe that will change,
but at the moment, I wouldn't hold my breath, so
I would argue smartphones and tablets are pretty much where
(57:42):
it's at. Maybe some implementation with some existing VR headsets which have external cameras mounted on them as well, like forward-facing cameras; you could build AR experiences there. Then it gets a little weird, because you're also, you know, looking at a monitor, so you're looking at a video feed of your surroundings, and on top of the video feed you get the overlay. Same thing
(58:03):
is true for your smartphones and tablets, by the way,
but that's different from the Google Glass implementation, where you're
looking at the actual physical world, not a video representation
of it, but the real world. And then because the
display itself that you are looking through is transparent, you're
looking at a transparent overlay of digital information that gives
you more info about the world you are in. I
(58:27):
think a R is super cool. I think it's really
got a lot of potential to change the world around
us and to change the way we interact with the
world around us. You could imagine a dystopian future implementation
of AR where we all have to wear glasses
and we're constantly getting personalized commercials beamed at us whenever
(58:48):
we look at anything. Like imagine walking past a store
casually looking in the window and then getting a whole
bunch of ads for all the stuff that's in the
store window. That would be obnoxious, and it's easy to
understand how people would not want that, yet also easy
to understand how that could possibly become our future. Or think of a future where your privacy is no longer
(59:10):
even relevant, and you walk down the street and you
look at all the people's faces who are also walking
down the street, and you're getting names of everybody and
what they like and what they dislike. You know, what
music they tend to listen to, maybe what they're listening
to right now, And it's all because we've got facial
recognition technology. Almost everyone has some sort of social media presence,
(59:34):
so you can map that face to any public profiles,
try and find a match. If you found a match,
you could bring back information to the person wearing the glasses,
so I can look at somebody and say, oh, this cute kid over here, she likes punk rock music. I'm gonna impress her with my
(59:54):
knowledge of The Cramps. That probably wouldn't work. But the
point being, this is pretty creepy and invasive, and so
there are some negative implementations of AR that we
have to watch out for. Unless we get to a
point where we just don't care about privacy at all anymore.
Some would argue we're already there, and in that case,
this implementation of a R may not sound creepy at all.
(01:00:16):
It might just sound kind of cool, kind of the
equivalent of walking into a store, seeing a person with
a name tag and addressing them by name. If they
don't remember they have a name tag on, they have
this moment where they think, do I know you? But
if we're in a world where everyone can see everyone's
name all the time, then well, for one thing, I
won't ever have to worry about being at a loss
when I have to introduce my wife to someone. So
(01:00:37):
that's a plus side. And I hope you guys enjoyed that classic episode of tech Stuff. If you
have any comments or questions, reach out on social media.
We are available at tech Stuff H S W both
on Twitter and on Facebook. You can also go to
our podcast page that's tech Stuff podcast dot com. You
can find a link to every episode we've ever recorded. There.
(01:00:59):
You'll also find a link to our online store, where
there's lots of tech Stuff merchandise, so you can go
ahead and spend all that hard earned cash on really
cool tech stuff products. I mean, seriously, we do have
some really neat ones. If you haven't checked it out,
go at least take a look at the designs. I
think you'll find them amusing, and who knows, maybe you'll
find a new favorite tote bag or t shirt. Then remember,
(01:01:21):
every purchase you make goes to help the show, and
we greatly appreciate it, and I will talk to you
again really soon. Tech Stuff is a production of I
Heart Radio's How Stuff Works. For more podcasts from my
heart Radio, visit the i heart Radio app, Apple Podcasts,
or wherever you listen to your favorite shows.