Episode Transcript
Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Speaker 1 (00:04):
Welcome to TechStuff, a production from iHeartRadio. Hey there,
and welcome to TechStuff. I'm your host, Jonathan Strickland.
I'm an executive producer at iHeart Podcasts, and how the
tech are you? So it is Friday, it's time for
a classic episode. The episode you're about to hear, originally
(00:26):
published on May twelfth, twenty seventeen, is called the Google
Glass Story. Hope you enjoy. I'm going to talk about
a subject that I've touched on in past episodes. In fact,
we did a full episode about this subject. I'm talking
about Google Glass. And while we did do an older
episode about this, I felt like we could really use
(00:49):
a chance to revisit this and kind of talk about
it in the more modern style of this show. You might
remember the old version of TechStuff was more conversational,
and this one's less so; it's more narrative. Well, Google
Glass was a big augmented reality project or an AR project,
out of Google's R and D labs, the Google X Labs,
(01:10):
that's the super secret research and development branch of Google.
And it probably was a little bit ahead of its time.
In fact, it really was ahead of its time, and
it was purposefully ahead of its time. That was part
of the problem, but I'm getting ahead of myself. It
might have suffered from some poor design and implementation, or
at least some choices that perhaps puzzled people. I wouldn't
(01:33):
go so far as to call them poor myself. I
actually really liked Google Glass. Maybe it was just too
darned expensive; it was pretty costly. But whatever the reason, and
we'll explore all of them, it's no longer a consumer
product that you can buy from Google. So today we're
going to talk about Project Glass, how it got started,
the technology behind it, and the failed experiment to turn
(01:56):
it into a consumer product, and where it is now.
The reason I decided to cover this is because the
week I'm recording this episode, The Telegraph ran an article
about some of the biggest flops in technology, and some
of the usual suspects were in there; the Nintendo
Virtual Boy was one of the major flops listed. There
(02:16):
are also some heartbreaking entries in there. The Sega Dreamcast,
for example. I know that's another video game entry there,
a console in this case. It was legitimately a really
good video game console. It just didn't do very well
in the market. But I still have one and it
was really really good. Then there was Google Glass. That
(02:37):
was also one of the ones that The Telegraph listed,
another heartbreaker. So what's the story behind it? Well, I
can't talk about Google Glass without first talking about augmented reality,
can I? You know me? You know I can't. I
am physically incapable of doing that. I'm sure many of
you are familiar with the concept of augmented reality or AR,
(02:59):
but just in case, I'm going to give you a
quick refresher. So Thomas Caudell gets the credit for coining
the phrase augmented reality in nineteen ninety. He was specifically
talking about a system that would allow you to see
where wires needed to be laid out in say, the
fuselage of an airplane. So if you've ever been able
(03:19):
to walk into an airplane where it's been stripped down
so you can see the guts of it. I was
recently at an airplane museum where I got to do this.
I walked through what had been a seven forty
seven that now included a section where everything got stripped
away so you could see essentially the bones, the skeleton,
of the seven forty seven, and part of that included
(03:42):
conduits through which wires would run. Well, when you're building
one of these airplanes, you need to know where those
wire conduits have to be so that you can make
sure they fit into the overall design. Caudell worked on
systems that would give a digital overlay as engineers would
look at this airplane design so that they could lay
(04:02):
the cables the proper way, make sure that they were
aligned so that they weren't going to end up messing
up the design some other way. And really, augmented reality
is just the integration of digital information into the real
world around us. And you can do this in lots
of different ways, but typically we talk about augmented reality
(04:24):
in terms of overlaying some sort of visual digital information,
like a digital display through which you can see a
view of the real world, and thus you can have
some augmented information on top of that view of reality.
So a good example of this might be a fighter
pilot who has a helmet that includes a digital heads
(04:45):
up display or HUD on the visor of the helmet itself,
so you might be able to visualize things like allied
aircraft; you'd be able to see the aircraft and it
would be identified within your visor, so that you get
information about it. That's a simple example, simple in the
sense that you can easily imagine it. It's actually quite
(05:08):
complicated when you build the technology. A few high end
cars have similar features that give digitally enhanced information that
is projected or otherwise displayed on the windshield itself, which
gives the driver the ability to see certain special views.
You might be able to see, like an infrared view
of what is outside the car, or a projection of
(05:28):
things like the outside temperature, that sort of stuff. But
it tends to be in those luxury cars. You don't
see it in a lot of the lower production cars,
even the nicer ones. It's pretty rare. You also have,
I'm guessing an augmented reality device on you. More likely
(05:50):
than not most of you, I'm sure have smartphones. So
if you have a smartphone, you have a device that
is probably capable of running augmented reality apps. There are
tons of different apps in the AR space for all
of the major smartphone platforms, iOS and Android being the
two big ones, but there are ones for other ones
as well, and typically the way these work is that
(06:11):
you hold your phone up, your phone's rear-facing camera pulls
in a view of the world around you. So it's
like you're getting a live view. It's like your phone
is a monitor for you to look at a live
camera feed of the world around you. But then the
app allows other digital information to be overlaid on top
of that view. So it might be something as simple
(06:32):
as directions of where to go. Say you've programmed your
phone saying I want to get to this one particular
coffee shop and it's about six blocks away, and you
hold your phone up and it tells you, all right,
you need to go three blocks straight ahead. Then you're
going to take a right. That's, again, a pretty simple
concept for augmented reality, but it could do other stuff too,
(06:55):
Like you might be able to hold it up to
a sign that's written in a language you can't read,
and it might be able to read it and translate
it for you. We're seeing a lot of translate apps
that incorporate this kind of augmented reality, and it's pretty awesome,
at least in my opinion. But there
are tons of different ways to implement augmented reality and
(07:15):
just as many different use cases for AR, and it
ranges from entertainment to productivity to industrial use and
beyond. Medicine, too. Lots of different potential uses for AR.
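(Purely as an illustration of that smartphone directions example, here's a minimal sketch of an AR overlay loop. It assumes OpenCV for the camera feed; the GPS and compass reads are stubbed-out hypothetical helpers standing in for platform location APIs, and the bearing math is a crude flat-earth approximation, not any real AR SDK.)

```python
# Minimal sketch of a phone-style AR directions overlay, assuming OpenCV.
# The GPS/compass functions are stubs standing in for platform APIs.
import math
import cv2

DEST = (33.7490, -84.3880)  # hypothetical destination (lat, lon)

def get_current_location():
    return (33.7550, -84.3900)  # stub: a real app would read the GPS

def get_heading():
    return 90.0  # stub: a real app would read the compass (degrees from north)

def bearing_to(dest, here):
    """Rough compass bearing from here to dest, in degrees (flat-earth)."""
    dlat = dest[0] - here[0]
    dlon = (dest[1] - here[1]) * math.cos(math.radians(here[0]))
    return math.degrees(math.atan2(dlon, dlat)) % 360

cap = cv2.VideoCapture(0)  # live camera feed: the view of the real world
while True:
    ok, frame = cap.read()
    if not ok:
        break
    # How far off our current heading is the destination? Range [-180, 180).
    turn = (bearing_to(DEST, get_current_location()) - get_heading() + 540) % 360 - 180
    hint = "go straight" if abs(turn) < 20 else ("turn right" if turn > 0 else "turn left")
    # The augmentation itself: digital directions drawn over the live view.
    cv2.putText(frame, hint, (30, 60), cv2.FONT_HERSHEY_SIMPLEX, 1.5, (0, 255, 0), 3)
    cv2.imshow("AR directions", frame)
    if cv2.waitKey(1) == 27:  # Esc to quit
        break
cap.release()
cv2.destroyAllWindows()
```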
Now, Google Glass started as a head-mounted computer device that
would incorporate augmented reality features into a display that you
(07:38):
could look through and still see the world beyond. And
while the project was first teased in a video in
twenty twelve, the origins for this project go back much
further than that. And first, before there was even a Google,
there was a guy named Thad Starner. I've actually had
the good fortune to meet mister Starner because he works
(08:01):
at Georgia Tech. He's a professor there, and I visited
Georgia Tech. I visited the wearable computing labs, and I've
chatted with him. He's an interesting dude, and Georgia Tech's
just down the street from HowStuffWorks, so it's
not hard for us to get over there. Well, he
also served as a technical lead for Google Glass. In fact,
I believe he served as technical lead longer than anyone
(08:24):
else in that position for that particular project. And it
was Starner's work in wearable technology and augmented reality that
you could say got the ball rolling. So you remember
I mentioned a guy named Caudell got the credit for
coining the phrase augmented reality. Well, Starner says that he
really coined that phrase independently of Caudell, also in nineteen
(08:48):
ninety, through a fellowship proposal he wrote, and he thinks
that that might be the first appearance of augmented reality.
He wanted to use the phrase artificial reality when he
was originally talking about this concept. However, someone else was
already using that term. That someone was Timothy Leary, and
if you don't know who Timothy Leary is, you should
(09:09):
look that up. Leary was specifically talking about the kind
of reality you experience when you have partaken in mind
altering substances. So it wasn't exactly the context that Starner
wanted to imply when he was talking about his technology,
so he didn't adopt artificial reality, and he went with
augmented reality instead. In nineteen ninety three, which was years
(09:34):
before Google ever even existed, Starner created a wearable display.
So it was a display that attached to his glasses,
and really he worked with other people to design this.
He didn't build it by himself with his own two hands,
but he worked with designers to create a computer system
that incorporated a display that could clip onto his glasses,
(09:56):
and it was pretty big and bulky, and it was
not transparent. It was like having a small computer monitor
mounted on your glasses. And he started to wear this
all the time wherever he went. It was kind of
an experiment in finding out how you might use wearable computers,
(10:17):
what elements are important, which ones are not important, how
do you incorporate it into your daily life. So he
would wear this all the time, and it very often
would prompt people to ask questions about this weird thing
that he had on him, on his face mainly. Eventually
there was a build of this that started going by
(10:39):
the name of the Lizzy. It was a wearable computer,
complete with input devices and with that head-mounted display
that, again, was kind of unsightly, originally anyway. This display
was able to show a resolution of seven hundred and
twenty by two hundred and eighty pixels, so a tiny resolution,
(11:00):
very low compared to today's screens. It was also monochromatic, so
only one color. And connected to that was a one-handed
keyboard called a Twiddler, which I wish were a joke,
but it is not. So it was a little one-handed
Twiddler keyboard. It kind of reminds me of the
(11:23):
keypads on old cell phones where if you wanted to
send a text message and you had to hit the
letter E, you had to press the DEF key twice
in a row in order for you to select E
before you moved on to your next letter. According to Starner,
he could type up to one hundred and thirty
words per minute using this one-handed keyboard method, which
(11:44):
is crazy fast. It's crazy fast even if you're using
two hands on a regular keyboard. So I was really
impressed to hear about that. Very large and heavy batteries
would give the computer more than ten hours of juice
before you needed to recharge. But I mean, these batteries
weighed several pounds at least, so they weren't little, unobtrusive
(12:10):
things attached to the computer. It was like wearing a
hefty backpack, so it wasn't exactly comfortable to wear all
the time. And the computer was really serving as a platform,
a development platform. It was the basis for programmers to
design wearable computing applications. So it's not like it magically
(12:31):
gave Starner some sort of crazy computing ability. It was
rather an early wearable device to help explore the applications
and implementations of wearables without having to worry about creating
a specific product. So this was really exploratory, and when
you think about it, it was meant to say, we
think there's something here, but we don't exactly know what
(12:53):
it is yet. We don't know what the final form
is going to be, So I don't have an idea
for something that's going to be in a package on
a store shelf that other people can go out and buy. Rather,
I think computing has the capability of transforming our daily
lives in yet another way. Now keep in mind, this
is in the early nineteen nineties. This is before smartphones, so
(13:17):
smartphones would go on to show us that, yes, indeed,
having a computer device that you can carry around with
you changes things. There's no question of that. The Internet
itself is a reflection of how much that has changed.
Internet advertising has changed dramatically because of mobile devices and
the popularity of them. So wearable computers definitely had the
(13:40):
potential to make an enormous impact, but we had to
figure out what sort of implementation made sense and what
kind of applications you would use it for. Because just
to have something, just to have it, that doesn't remain
compelling for very long. That would be part of the
problem of Google Glass later, spoiler alert. Now, one of
(14:02):
those applications that Starner created early on was a program
that would keep track of everything he said, whether or
not it was straight voice-to-text, I am not
entirely certain. I'm sure at some point it really was.
But he would also be able to type things in
using the twiddler, which I remind you it's not a joke.
(14:25):
So it was kind of like a perpetual notes system.
He could keep track of things he was thinking about
and talking about, and then he could run a search
back against that to see if he could find anything interesting
later on, and he said that about ninety five percent
of the time what he got back was garbage, wasn't
particularly interesting, insightful, or helpful. But about five percent of
the time it was totally the opposite. It was something
(14:48):
really worth knowing and remembering. So he thought of it
as something like a memory booster. It wasn't replacing his memory,
but augmenting it. Now, according to popular accounts, Starner attended
a conference in the late nineties and he met two
postgraduate students from Stanford, so this would be about nineteen
(15:09):
ninety eight. Those two postgraduate students were named Sergey Brin
and Larry Page, and if you're familiar with those two,
you know those are the co-founders of Google. In fact,
they would found the company Google just a few short
months after this conference. Now, Brin and Page thought that
Starner's wearable computers were fascinating, and he gave them a
(15:31):
full demonstration of what the computer was capable of doing. However,
at the time, they were already focusing on a way
to improve search engines on the web, so they couldn't
really dedicate any attention to wearable computers. They were too
busy perfecting their search engine approach. And this was important
because back in the day, before things like Google's
(15:54):
algorithm, which got so sophisticated that it was able to
ignore a lot of the tricks people were using to
try and direct traffic to their sites, web search engines
were just okay. Most of them were looking for instances
of keywords appearing on a page, which meant that people
would try and fool search engines by inserting as many
different key search terms as possible at the bottom of
(16:17):
a page, even if the page had nothing to do
with that particular concept. And why would they do this
Because web advertising is based off of how many views
you get on a page. Page views are king, at
least in most web advertising, not all of it, but
a lot of it. So if you were able
to direct a lot of traffic to your web page,
(16:38):
that was another page view. It didn't matter what
people found by the time they got there;
if it wasn't what they were looking for and
they bounced, you still got the page view. You don't care
where they go afterward. Well, one of Google's big
missions was to create a better search engine that would
ignore all of that gaming of SEO and to try
(17:00):
and look for the links that are the best representation
of whatever it is you're searching for. So that's what
their real focus was on. However, this meeting between Starner
and Brin and Page may have been the first seed for
Google Glass, and it happened in nineteen ninety eight, before
Google was an official company. Now flash forward a decade,
(17:24):
it's two thousand and eight. Google by this time is
a huge company, incredibly successful, and it had just launched
its own smartphone operating system called Android. Starner immediately thought
that Android had promise as more than just an operating
system for a smartphone. It could be ported over to
(17:47):
all sorts of different devices and used for lots of
different stuff, particularly things that were small and nimble, stuff
that wasn't like your traditional laptop or desktop computer. That
includes wearables. So he tried to get in touch with
Brin and Page. However, the phone number he had from ten
(18:08):
years previous was no longer going to either of them.
Big surprise, right? And he kind of let it lie for
another couple of years. In twenty ten, he would write
an email to Sergey Brin and he said, you should
probably come out to my lab at Georgia Tech and
take a look at the wearables and see where we've
(18:29):
gone in the last ten years. I think you'd be
really interested. And Brin said, you know, we're thinking about wearables.
We think it's about time. And rather than us go there,
how about we fly you and your team out here
so you can give us a full day of demonstrations.
And Starner said yes, and next thing he knew, he
became a technical lead for Project Glass over at Google's
X division. Shortly thereafter, after Starner joined the team,
(18:55):
Starner, Brin, and Page were able to bring over another leader
in wearable tech, someone that Starner had worked with in
the past, Greg Priest-Dorman, who had pioneered work with
biofeedback systems in the seventies and into the eighties and
then moved into wearable computing. He's another huge name in wearables,
so he also joined the Google Glass development team. On
(19:18):
August eighteenth, twenty eleven, four Google employees filed a patent
for what was called a Wearable Device with Input and
Output Structures. That's the title of the patent, and like
most patent titles, it's a little dry and it seems
pretty nondescript. The illustrations accompanying the patent include one that
closely resembles what the final form of Google Glass turned
(19:41):
out to be. Some of the other illustrations looked like
a more or less normal pair of glasses. Some
of them looked a little strange or odd. The patent
described the general components for the invention, which you know
patents are supposed to do. In order for you to
get a patent on something, you have to actually explain
how the thing works, the parts that it's
(20:01):
made out of. Otherwise you're not supposed to
get a patent on it. So what did this cover? Well,
I suppose I can quote directly from the patent itself
under the claims section. The first and most important of
the claims reads thusly: an electronic device comprising a
(20:24):
frame configured to be worn on the head of a user.
The frame including a bridge configured to be supported on
the nose of the user, a brow portion coupled to
and extending away from the bridge to a first end
remote therefrom and configured to be positioned over a first
side of a brow of the user, and a first
(20:45):
arm having a first end coupled to the first end
of the brow portion and extending to a free end,
the first arm being configured to be positioned over a
first temple of the user, with the free end disposed
near a first ear of the user, wherein the bridge
is adjustable for selective positioning of the brow portion relative
(21:05):
to an eye of the user; a generally transparent display;
means for affixing the display to the frame such
that the display is movable with respect to the frame
through rotation about a first axis that extends parallel to
the first brow portion; and an input device affixed
to the frame and configured for receiving from the user
an input associated with a function, wherein information related to
(21:29):
the function is presentable on the display. What the what? Well,
I'm going to explain what that means, but first let's
take a quick break to thank our sponsor. All right,
(21:49):
So what did that patent speak actually mean? Because it
got so weird and dry, and that's typical for patents,
by the way. It's a very formulaic approach because
the patent system is a bureaucracy and you have to conform
(22:12):
to the methodology of the bureaucracy in order to get
your idea through. Otherwise, if it doesn't conform, you're not
likely to get a patent award. So what it actually
means is the invention would fit on your face like
a pair of glasses. One arm, or stem if
you prefer, of these glasses would curve around the brow
above one eye to rest against the side of your head,
(22:35):
so over one eye, and in early Google Glass it
was always the right eye, there'd be a little protrusion.
At the end of that protrusion, that's where
the screen would be, the clear, transparent screen. I guess
I can say clear and transparent like I can say
ATM machine. Then the other part of it would wrap
(22:57):
around back behind your ear, with a part of it
resting just behind your ear. More on that in just
a second, but that would be the main part of
Google Glass. The other section would just be a dumb
frame that exists really just to provide stability. So you
would have a part that would rest on the bridge
of your nose that would help support the stem I
(23:21):
was talking about. And on the other side, your second ear,
because you remember the patent said first ear, first brow,
et cetera, et cetera; the other one would be the
second one. This side is not important to the invention because
nothing of any technical import is going on on that side.
It's literally just a frame to hold the rest of
(23:43):
the technology in place. So over your right ear you
have the stem of the Google Glass, and in that
section you've got a control area. You've got your battery,
you've got your speaker, you've got your projector for the screen,
and inside, you've got the processor, you've got the
receiver for Bluetooth, et cetera. On the other side, over
(24:04):
your left ear, it's just a regular little stem that
fits over your ear, kind of like a regular pair
of glasses. There's nothing, no technical elements inside that side
of it. It's just dumb plastic. So the part that's
right behind your ear, that's where the speaker was, that's
(24:26):
where the audio would come from. Obviously, if you're watching
a video or you're taking a call, you would need
to be able to hear things through the glasses. It's
cool because this particular speaker wasn't just playing audio blasting
it out into the real world. It was using bone
conduction to transmit audio so that you could hear it. Now,
(24:46):
if you stood close to someone who was wearing Google
Glass and they were playing audio, you would hear stuff.
Because when you get down to it, sound is vibration.
The sound that you are hearing right now is transmitted
through molecules vibrating at the frequency that I'm talking at,
and the amplitude as well. So when you
(25:08):
get down to the fact that sound is just vibration,
bone conduction makes sense. It is what it sounds like.
Sound is transmitted through bone. Your bone conducts sound. Bones,
I should say, although in this particular case we're talking
about the skull, otherwise known in medical circles as the head bone.
So the head bone would transmit the sound from the speaker.
(25:31):
So what's happening there? Well, I guess it helps
if we talk about just the regular sense of hearing first, right?
These vibrating air molecules enter your ear canal. They cause
your ear drum or your tympanic membrane if you prefer
to vibrate. Now, that vibration gets transmitted to a tiny
set of bones in your middle ear, which acts as
(25:54):
kind of like an amplifier, and they ultimately terminate on
an organ called the cochlea, which is sort of in
a spiral shell shape. It's got some fluid in it,
and it's got some finger like nerve endings in it
that are in this fluid. And when the cochlea is vibrated,
then this fluid moves around and that stimulates the nerve
(26:16):
endings inside of it, which then sends signals to the
brain, which then interprets that as sound. It's pretty
cool when you really give it some thought. Well, the
neat thing about bone conduction is you can bypass the
ear drum entirely. You can send vibrations through bone, which
then will reach the inner ear on their own bypassing
(26:37):
that pathway that all other sound tends to take and
vibrating the cochlea directly. So there are a lot of
sports earbuds or headbands or whatever that have these kind
of bone conduction speakers. Some people call them bone phones.
I do not, but some people do. And the Google
(27:00):
Glass had one of these types of speakers to, again,
transmit sound without it blasting out into the general world.
Pretty interesting stuff. And this meant that you could watch
things like videos, or listen to a voicemail or make
a call using Google Glass, and you could hear through
Google Glass. You didn't have to hold a phone up
to your other ear. Now, the invention also incorporated a
(27:25):
screen that was transparent, so, in other words, you had
to be able to see through the screen and be
able to view the outside world through it. It couldn't
have an opaque backing, so you couldn't silver the back
of the screen, which presents some challenges. So how do
you project images so that you can see a display
(27:45):
and read the digital information? Because again that's what augmented
reality is all about, right, it's overlaying that digital information
on top of a view of the actual world around you.
So if you can see through the screen, how do
you display images on said screen? Well, Claim nine
on that same patent gives some information on that matter.
(28:06):
It says the electronic device of Claim one wherein the
generally transparent display is a prism of a transparent material
configured to make an image projected into a side of
the prism visible at a surface of the prism that
is at a non-zero angle to the side of
the prism. Now what that means is that at a casual glance,
(28:29):
the prism looks like an elongated cube of clear plastic,
and it looks like that's all it is. But if
you were to take a look at it from the top,
like the top of a pair of Google Glass, and
you were looking down, you would see that there's a
fine line bisecting from one corner of this prism down
(28:52):
into about the middle of the prism, this weird
little diagonal. This is an angled layer inside the prism,
and the way it works is it allows light to
pass through straight from ahead of you. So if you're
wearing the pair of Google Glass and you're looking at something,
light can pass straight through the prism, no problem. But
(29:14):
if light were to come from a right angle to
the right, as in a place where a little projector
in the Google Glass could project out images, it would
then redirect that light ninety degrees so that it goes
into your eyes. In other words, you can still see
the world in front of you because that light can
(29:35):
pass through unimpeded. Light coming from the right side, which
is where the little digital projector is, that gets reflected
into your eyes, so then you can see the digital
information overlaid on top of the world around you. So
think of it kind of like a mirror, but the
mirror only works if light is coming at it from
(29:56):
a very specific angle. It's, you know, in a way,
like one of those two-way mirrors where you can
see through one way but not the other. It's
kind of similar to that.
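(For the curious, here's a toy numerical version of that one-way-mirror behavior, a sketch of the geometry only, not Google's actual optics: the angled layer is modeled as an ideal 45-degree mirror in a 2D top-down view.)

```python
# Toy 2D model of the Glass prism's angled layer as an ideal 45-degree mirror.
# This is a sketch of the geometry only, not Google's actual optical design.
import numpy as np

def reflect(direction, normal):
    """Mirror-reflect a direction vector about a surface with this normal."""
    d = np.asarray(direction, dtype=float)
    n = np.asarray(normal, dtype=float)
    n = n / np.linalg.norm(n)
    return d - 2.0 * np.dot(d, n) * n

# Top-down view: the eye looks along +x, so light entering the eye travels -x.
world_light = np.array([-1.0, 0.0])      # scene light from straight ahead
projector_light = np.array([0.0, -1.0])  # ray from the side-mounted projector
mirror_normal = np.array([-1.0, 1.0])    # the 45-degree layer in the prism

# The layer is only partially reflective: straight-ahead light passes through
# unchanged, which is why you can still see the world...
print("scene ray reaching the eye:", world_light)
# ...while the projector's ray is folded 90 degrees into the same path.
print("projector ray after prism: ", reflect(projector_light, mirror_normal))
# -> [-1.  0.]: both rays now head into the eye, overlaying the image.
```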
So I thought that was super neat. It was actually the most fascinating part of
Google Glass when I was first learning about it way
back in twenty twelve, when I was trying to learn
(30:19):
how it worked so I could write an article about it,
and in fact I did write at least a version
of How Google Glass Works. I haven't looked on the
website to see if that's still my version, because we
update these articles on occasion and sometimes that changes the
authorship as well. But yeah, way back in the day,
I wrote an article about how Google Glass works, and
part of it was just learning about this prism and
(30:40):
I thought it was fascinating. In addition, the design of
this prism meant that it was slightly above your normal
line of sight. So this is that part of the
patent that talked about it being offset from your eye.
It's mounted a bit above where your natural line
(31:03):
of sight would be, so you would always have it
in your field of view, but it wouldn't necessarily be
in focus unless you were to glance upward so that
your right eye would be looking at the prism directly.
This was a specific design implemented by Google in order
to avoid things like people getting distracted by digital images
(31:23):
when they should be paying attention to the world around them,
and to make Google Glass something that you reference rather
than something that replaces your view of the world around you.
Very important if you're doing something like I don't know,
walking or driving or riding a bicycle, anything where you're
moving through a space. You want to be able to
make sure that you're not having your attention divided so
(31:47):
that you end up doing something stupid, like walking into
an open manhole or into a telephone pole. I have
done that before, for realsies, but that was because I
was looking at a phone. I've been that guy. I
don't look at my phone when I'm crossing streets, but
I look at it when I'm walking down sidewalks, and
I have paid that price on multiple occasions. You think
(32:08):
I've learned by now, and I guess I have. I've
learned how to replicate that experience almost perfectly. The patent
also included an important component in Google Glass's control system,
which was a capacitive touch bar along that one stem,
the active stem of Google Glass. And I've talked about
capacitive touch before, but let's quickly go over that so
(32:31):
that we understand what I'm talking about. So with a
capacitive touch surface, you have the makings of a circuit, right.
All that remains is you have to have something to
close the circuit. You have to have something conductive make
contact with a capacitive surface so that a complete circuit
can form and when a complete circuit does form, there's
a voltage drop at the location of the touch. So
(32:54):
if I have a giant touch screen in front of me,
and it's a capacitive touch screen, and I reach down
with one finger and make contact with that screen, I
complete a circuit, and there's a voltage drop at the
point where I touched it. Software on the device is
able to interpret that as being an actual command of
some sort, and it might be selecting a program. It
(33:16):
might mean turning volume up or down. It might mean
swiping right because that dude you're looking at on tender
is pretty hot and you kind of want to know
what's his deal man, so you want to swipe right
on that. Lots of different applications. There are also resistive
touch screens that work on a slightly different principle. Those
(33:37):
are the ones that you have to actually use pressure
when you're making contact because it's not enough to touch it.
You actually have to press so that you're slightly deforming
the layers in order to create a circuit within the
display itself. But that's a totally different type of touch screen. Now,
in the case of Google Glass, the capacitive panel is
(34:00):
a means to cycle through the various features that are
on the glasses. It's also a way to execute other
basic commands. So Google would go a step further than this.
You wouldn't have just touch commands. That was just one
way you could interact with Google Glass.
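(As a rough sketch of how firmware might turn those voltage-drop readings into the taps and swipes just described, here's a toy gesture classifier. The sensor read is a stub; none of this is Google's actual driver.)

```python
# Toy classifier turning capacitive touch-bar samples into commands.
# read_touch_position() is a stub for the touch controller, which in a real
# device reports where along the stem the voltage drop (the finger) is.
import time

def read_touch_position():
    """Stub: position 0.0..1.0 along the touch bar, or None when idle."""
    return None

def next_command(sample_hz=60, tap_time=0.25, tap_slop=0.05):
    start = first = last = None
    while True:
        pos = read_touch_position()
        now = time.time()
        if pos is not None:
            if first is None:
                start, first = now, pos  # finger just came down
            last = pos
        elif first is not None:
            # Finger lifted: classify the completed gesture.
            if now - start < tap_time and abs(last - first) < tap_slop:
                return "tap"  # short and stationary: select the current item
            return "swipe_forward" if last > first else "swipe_back"
        time.sleep(1.0 / sample_hz)
```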
They also incorporated some motion controls and voice commands. So, for example, if
(34:21):
you wanted to turn your screen on and you wanted
to wake up Google Glass from its power saving mode,
you could choose an option whereby tilting your head
backward at a fairly good rate would wake up Google Glass.
It's kind of like saying 'sup to someone. We'll be
back with more about the Google Glass story in just
(34:41):
a moment. If you want to take a photo, you
could start with the phrase okay Google, which probably sounds
familiar to you if you have an Android device or
a Google Home and if I just activated it, I
(35:02):
am sorry. You could then follow up with take a picture,
so you could give a voice command to your Google
Glass to take photos.
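(Here's the shape of that flow as a toy hotword gate. The transcription function is a stub standing in for a real always-listening speech recognizer, and the command table is made up, not Glass's actual command set.)

```python
# Toy version of the "okay Google, take a picture" flow: a hotword gate
# in front of a small command table. transcribe_chunk() is a stub.
HOTWORD = "okay google"
COMMANDS = {
    "take a picture": lambda: print("camera: photo captured, indicator LED flashed"),
    "record a video": lambda: print("camera: recording started, indicator LED on"),
}

def transcribe_chunk():
    """Stub: a real system runs a low-power, always-on speech recognizer."""
    return ""

def listen_loop():
    armed = False
    while True:
        text = transcribe_chunk().lower().strip()
        if not armed:
            armed = HOTWORD in text  # hotword heard: arm for one command
        elif text:
            action = COMMANDS.get(text)
            if action:
                action()
            armed = False  # one command per hotword, then re-arm
```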
I can speak from experience that people thought that this was the coolest thing ever when
Google Glass was brand new and only a few pairs
were out in the wild. In September of twenty twelve,
after I had a pair of Google Glass, and yes
(35:24):
I did have a pair of them. Technically, the company
purchased them. I was just the representative who got to
be the one to use them. When I had them,
I took them to Dragon Con, which is a big
science fiction and fantasy convention here in Atlanta, and I
would ask people, may I take your picture? Because there
are a lot of people in costume, a lot of
great cosplay at Dragon Con, and most of the time
(35:46):
people would say, yes, absolutely, that's why I got dressed up.
And then I would say okay, Google, take a picture,
and they would look at me confused because they had
never seen Google Glass do that before. And then the
little light would flash on the Google Glass because that
would indicate that a photo had been taken. Google wanted
to have a physical indicator to let people know when
(36:07):
a camera was active. And then immediately they would freak
out about the fact that I was wearing the future
on my face, and I would invariably have to take
another photo because the first one would mostly be, if
you've ever seen a dog look confused, that's what almost
all of the first photos looked like from that Dragon Con.
Because I was talking to a pair of glasses and
(36:28):
they thought I was talking to them and they were wondering
why I was saying this. These days, everyone would totally
understand what was going on. At the time, it was
pretty new. Now, to avoid making the headset too heavy
or too hot, the designers decided that Google Glass would
have to be a peripheral piece of technology, so in
other words, they didn't want it to be loaded down
(36:49):
with processors and with various connectivity chips like cellular connectivity,
Wi-Fi, all of that kind of stuff. So you
would pair it to another device, like a smartphone,
for example, and the smartphone would act as your link
between the glasses and the outside world, for the most part.
(37:12):
You could also do Wi-Fi, I think, but obviously
Wi-Fi is not pervasive, so if you're anywhere where there's
not Wi-Fi, you would need to have almost like
a modem, and your smartphone would act as your modem
with a Bluetooth connection allowing for the communication between your
glasses and the phone. Now, that simplified the components that
needed to go into the glasses themselves, and that allowed
(37:32):
the designers to make it a little bit lighter and
not worry about so much heat generation. It also meant
that there was less of a drain on the battery,
which was perhaps the biggest challenge for the team was
designing a battery that would be light enough and yet
powerful enough to do what they wanted it to do.
They needed it to not make the glasses uncomfortable to
(37:56):
wear or really unwearable for any length of time, but
they also wanted to make sure they had enough juice
to get a decent amount of use out of it
before you had to recharge it. They also got around
this by having Google Glass go into sleep mode pretty quickly,
so that anytime you were not actively using it, it
could conserve power, and that way you could get maybe
(38:17):
another hour out of it. It drained pretty fast back
when I got one. Earlier wearable computers would require those
massive batteries I talked about before, ones that weighed four
or five pounds. Now, you don't want something like that
on your glasses. It would just be excruciating. So this
(38:37):
was a huge challenge. It remains a huge challenge in
wearables to this day: figuring out the balance between
the features you want to include and how much
a battery can provide before you're going to have to start
recharging it every few minutes. While the paperwork was being
drawn up with the patents, Starner and others were in
Google's labs building this actual hardware and testing it constantly.
(39:01):
Whenever they were in the lab, they were wearing these devices.
Whenever they were doing their regular jobs, they were wearing
these They were testing them, they were trying them out,
they were adding new features, taking old ones out when
they didn't work, building apps just for their own use,
to kind of really test the limits of what this
technology could do. And then at the end of the
day they would put them up and go home because
(39:23):
they weren't ready for the world to see Google Glass yet.
They didn't want to go out and start using it
outside where people might wonder what the heck is going on,
and then next thing you know, everyone's talking about it.
So in twenty eleven, Google employees submitted the patent for
the head-mounted display. The Patent Office would eventually grant that
(39:45):
patent on February twenty first, twenty thirteen, which actually is
not that long at all. That's pretty fast in many
cases for the gap between a patent application and when it's actually granted.
A couple of years is nothing. But then Google
let the cat out of the bag. In between those
two dates, in April twenty twelve, Google released a concept video,
(40:08):
and this was a video that was shown from a
first person view, so you were in the shoes of
a guy wearing some sort of device that had something
to do with Google. And the reason why I'm vague
is because the video was pretty vague. It was just
giving you an idea of what this experience might one
day be like. Clearly, it was a head mounted display
of some sort with a heads up display element to it,
(40:32):
and it allowed this guy to send messages, to make
phone calls, to take images, to share live video with
someone else. So, in other words, the camera on this
device could feed video directly to somebody somewhere else in
the world, and it could even alert the person to
(40:52):
what was going on in the world around them. So
at one point, he's on his way to an appointment
and he wants to take the subway, and as he
gets close to the subway, a little alert pops up
and it says there are delays at the subway station,
and he's like, aw shucks, and so he decides to
walk instead. So some of this stuff would get worked
into Google Glass. Some of it wouldn't be directly worked
(41:15):
into Google Glass, but they could fudge it through other means.
But it was an interesting video and it got a
lot of people talking. It generated a lot of buzz. Now,
you might think, like with the subway example, was that
a case of machine vision understanding that the subway the
(41:37):
guy is walking to is a particular one in a
particular system, and that that one is one that's being
affected by delays. That was not worked into Google Glass
because the camera cannot be perpetually on. If it were,
the battery would drain very very quickly. Instead, what you
could do is you could incorporate GPS from a smartphone
(42:02):
as well as a person's calendar. If they've added an
appointment to their calendar, and if the phone makes a
quote unquote guess that you are on your way to
said appointment, and it notices you're getting closer to a
subway station because of your GPS coordinates, it could then
send you an alert saying, oh, we've got this message
saying that that particular subway station is experiencing delays.
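(A small sketch of that background guessing logic, with made-up data classes and no real calendar or transit API, just GPS proximity plus an imminent appointment.)

```python
# Sketch of the "skip the subway" alert: no camera, no machine vision,
# just GPS plus the calendar. All names here are hypothetical.
import math
from dataclasses import dataclass

@dataclass
class Appointment:
    title: str
    start_minutes: float  # minutes from now

@dataclass
class Station:
    name: str
    lat: float
    lon: float
    delayed: bool  # would be fed by a transit-status service

def meters_between(a, b):
    """Small-distance approximation, fine for 'am I near the station?'"""
    dy = (b[0] - a[0]) * 111_000
    dx = (b[1] - a[1]) * 111_000 * math.cos(math.radians(a[0]))
    return math.hypot(dx, dy)

def maybe_alert(here, appt, stations, near_m=150):
    """Return an alert string if the wearer seems headed for a delayed station."""
    if appt is None or appt.start_minutes > 60:
        return None  # no imminent appointment, so nothing to guess about
    for s in stations:
        if s.delayed and meters_between(here, (s.lat, s.lon)) < near_m:
            return f"Delays at {s.name}. Walking may be faster to '{appt.title}'."
    return None

# Example: approaching a delayed station with a meeting in 20 minutes.
print(maybe_alert((40.7527, -73.9772),
                  Appointment("client meeting", 20),
                  [Station("Grand Central", 40.7527, -73.9770, delayed=True)]))
```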
(42:25):
Now, from your perspective, it's almost like Google Glass looked at
the subway and then said, hey, buddy, let's walk. You're
never going to get there if you try and get
on the train. But in reality, it could be stuff
that's happening in the background. The nice thing is it
doesn't really matter how it happens. What matters is the experience.
Does the experience make sense? Is it compelling? Because if
(42:48):
it is, it doesn't matter if it was because of
machine vision or if it was because of a combination
of GPS and calendar apps and other information. The result
is the same either way. Other than the fact that
in the way I just described, it's not draining your
Google Glass battery super fast by having the camera on
(43:09):
all the time. The video got a lot of attention.
People were excited about it. People wanted to know more,
but Google did not show off Google Glass at that
particular moment. They did show it off not too long
after that, and they did admit they were working on
an augmented reality project and they wanted to get feedback
(43:31):
from people about how they would use it and what
they thought about this concept. A couple of months later,
they held the Google I/O developer conference. I've been to
one of these. They're very very nerdy. It is interesting.
It gets incredibly technical very fast, because it's meant for
(43:51):
people who are developing apps for Google platforms. The event
took place in San Francisco's Moscone Center, and high above
the center was a blimp with several skydivers in wingsuits
and parachutes, and each person in that group was wearing
a pair of Google Glass. And in the middle of
a totally different presentation, Sergey Brin runs up on
(44:12):
stage and he says, I'm sorry to interrupt, but we've
got a time sensitive event going on right now, and
he switches to video showing the blimp above the Moscone Center.
Then they switch to a camera inside the blimp itself
and you see the team there, and you see that
each member of the team is wearing a pair of
Google Glass. They start a Google hangout using their Google Glass,
(44:33):
and then you can see the screens from each of
their perspectives. So now you're getting a first person view
of these people who are inside the blimp. And eventually,
once the blimp is in the right location, they make
a jump. They jump out of the blimp, they fall,
they launch their chutes, they land on the roof of
the Moscone Center. They hand over a package to a
(44:56):
person riding a mountain bike and he's also got Google
glass on. He rides over to the edge of the building,
switches over into rappelling gear, rappels down the side, gets
inside the Moscone Center, gets on a different mountain bike,
rides over into the conference room, down the aisle, straight
over to the stage, and brings Sergey Brin a package
(45:19):
that has inside of it a fresh pair of Google Glass.
It was a heck of a demo. I watched it live,
not in the Moscone Center. I was watching it remotely.
I was not at that particular Io event, and my
jaw was on the floor to see this display of
wearable technology in such a cool application. You know, this
(45:43):
Google Hangout that's showing me what it's like to skydive
from a first-person perspective. And then this journey from
the sky to the roof of the building down into the
center itself. It was an effective demonstration. Now, in August
twenty twelve, Google would receive a patent. Now this was
not the same one as the head-mounted display patent,
(46:06):
that one wouldn't be granted until twenty thirteen, but this
was a patent for a process, and the title of
the patent was Unlocking a Screen Using Eye Tracking Information.
So this would require having some sort of camera facing
back toward a person's eye, and it would look at
motions of the eye and interpret that as various commands,
(46:28):
including unlocking a device. So you may have had a
smartphone at some point that allowed you to create a
pattern by tracing something on a screen. An old Android
one I used to use had a grid of dots,
and you would connect dots in a particular pattern, set
that as your pattern of choice, and every time you
(46:49):
wanted to unlock the phone from that point forward, you
had to retrace that pattern. Well, you could do the
same thing with eye tracking if you had a virtual
display of dots and you stared at one dot until
the camera had picked up and acknowledged yes, you're looking
at the correct one, and then you just move your
eye from dot to dot, and this is like moving
(47:10):
your finger from dot to dot, and you could unlock
a screen in that way. That's just one particular approach
that you could use to unlock a screen using eye tracking.
This particular type of feature would not be included with
Google Glass, but it shows the kind of stuff they
were thinking about while they were putting all this together.
Yet another patent revealed what Google hoped to do with
(47:32):
Project Glass, and it used gestures. So these would be
hand gestures, actual hand gestures you would make in front
of your face like a crazy person in order to
send commands to your Google Glass. So let's say you
see something on the street and you like it. It
could happen. Maybe you see a poster for a band
(47:55):
that you like and they've got a show coming
up nearby. Make a little heart shape with your hand. Aw,
so cute, you Millennials. And then this would end up
being picked up by the camera and interpreted as liking
the thing you are looking at. Maybe it posts to
a social media page saying, Jonathan likes the fact that
(48:17):
They Might Be Giants is going to be in Atlanta
at the Variety Playhouse playing a show. And yes, I would
like that. I love They Might Be Giants, and the
Variety Playhouse is a delightful venue. I don't know that
I would go so far as to make the little
heart shape with my hands, though. I do have my limits.
But this shows another way of how Google was thinking
(48:38):
about interactions with its technology and ways that you could
control it beyond just using a capacitive touchscreen or even
voice commands. That also would not find its way into
Google Glass, but it might find its way into some
future Google product. Now, next we'll explore a bit with
Google Glass, and we will pay handsomely for it. But
(49:01):
before I jump into that, let's take another quick break
to thank our sponsor. In April twenty thirteen, Google would
open up an extremely limited program called Glass Explorers. So
(49:22):
I guess actually it was Dragon Con twenty thirteen, not
Dragon Con twenty twelve, which totally makes sense. I can't
tell those years apart; Dragon Con to Dragon Con, they
all bleed together. But this program, the Glass Explorers program,
was a pilot testing program. It was really a beta
test for the technology itself, and you had to apply
(49:43):
to be in it. You actually had to fill out
an online form to explain how you would use Google
Glass and why you think you would be a good
candidate for the Glass Explorer program. Not only that, but
you had to pay for it. You couldn't just get
a pair. They cost one thousand five hundred dollars, fifteen
(50:04):
hundred dollars, for a pair of Google Glass. Now, the
application process was pretty simple, but it meant that Google
could very quickly go through this huge number of applications
and decide which ones sound like good choices and which
ones don't. Now, in my case, our company bought the
pair of Google Glass. I applied for them, but it
(50:24):
was a company purchase, so I don't actually own those
Google Glass anymore, but I was able to buy some.
Now I'm going to tell you a funny little story
that happened to me. And this is all because Jonathan
doesn't pay enough attention when he fills out online forms.
So in the United States, there were a couple of
different places you could pick up a pair of these
(50:45):
glasses once you ordered them, and you had to physically
go to these locations. Google did not want to send
anything to anyone because they didn't want to deal with
the case of someone intercepting a shipment and stealing it
or selling it or breaking it apart or whatever. They
wanted to keep a tight rein on who actually got
(51:07):
their hands on these Google Glass. So as part of that,
you had to go to one of these physical locations
to pick up your pair. They did not have one
in Atlanta, which is where I live. They had them
in Los Angeles, in San Francisco, and New York. Out
of those, New York is the easiest for me to
get to. New York is a couple of hours' flight
from Atlanta. If I want to go to San Francisco
(51:29):
or Los Angeles, you're talking four or five hours each way.
So I decided I would pick up my Google Glass
in New York, and I filled out the form and
I indicated the day and location where I wanted to
pick it up. But then I realized something: I had
double-booked myself. Something else was happening the day that
I had chosen, and Google Glass would allow you to
(51:51):
change your delivery date one time only. That's it, so
if you messed up, you could change it once and
that's all you could do. So I go back on
there and I'm like, well, I can't take this day
that I thought I was gonna do. I'm going to
do this other day. So I chose a different day.
I did not realize, however, that by choosing a different day,
(52:13):
my choice of destination got reset to the default, which
was Los Angeles. So instead of rescheduling to fly to
New York, I rescheduled to fly to LA. So I
actually had to fly to Los Angeles, go to Venice, California,
which is where Google has an office, and pick up
my pair of Google Glass. Bright side of it is
(52:37):
the day after I got my Google Glass, I got
to really try it out at the happiest place on
Earth because I went to Disneyland and I brought my
Google Glass with me. So funny, stupid story about Jonathan
not paying attention. Moral of the story is pay attention
on your online forms, especially if you have to resubmit. Anyway,
(52:58):
back in those days, I still wore eyeglasses. I've had
laser eye surgery since then, but back when I got
the Google Glass, I still wore eyeglasses, and Google Glass
at that point had not been outfitted to work with
existing pairs of glasses. You could either wear Google Glass
or you could wear a pair of glasses, but you
(53:19):
couldn't really do both unless you worked for Google like
Sergey Brin, and you could have someone make a custom
pair for you. So instead I would have to wear
contact lenses. This was one of those early complaints about
the Google Glass program. A lot of people wanted to
be able to attach the Google Glass part to an
existing pair of glasses, and there was just no way
(53:40):
you could do that. In May twenty fourteen, Google opened
up the Explorer program and allowed more folks to join,
and there were talks of special Google Glass stores, including
floating barges, off the coast of places like San Francisco
and New York. And these were meant to be interactive
tech spaces and perhaps even a high-end luxury
showroom for Google Glass. But by June twenty fourteen, so
(54:04):
May twenty fourteen, this is all in the news, and
in fact, the barges have been kind of coming together
over the last couple of years, but in June twenty fourteen,
the plans for the barges were, I guess, scuttled, which is
a good word if we're talking about barges. The plans
were scuttled. The barges weren't scuttled immediately, but the plans were.
(54:24):
They never saw service, they never came online, they never
became stores. They did raise a lot of interest as
people watched them take form on a daily basis, but
they disappeared with about as much of a whimper as
could be. They were never really explained, and so they
kind of started to become something and then went away
before anyone could really figure out what exactly they were
(54:47):
going to turn into. In January twenty fifteen, Google pulled
the plug on the Explorer program because ultimately it had
been a failure, but not necessarily for the reasons you
might think. So let's rewind a little bit. It's twenty
(55:08):
twelve and Project Glass has a divided team. You've got
two main camps of engineers who are disagreeing about a
fundamental aspect of Google Glass. One team thinks this should
be a persistent wearable device, meaning you put it on
and you wear it all day long. It's like a
fashion accessory, it belongs as part of your outfit. The
(55:32):
other team disagrees and says, no, this should be something
that you're using for specific use cases. So a situation
comes up when you would need Google Glass. That's when
you put it on, and then when that's over, you
take it off again. The two camps can't agree on which
direction they should go in. Sergey Brin, who's excited about
the technology but too impatient to wait and get all of
(55:54):
this sussed out, has an idea. Instead of relying on
a relatively small group of engineers and developers, why not
create a beta testing program and invite a wider group of
people to use it in the real world and take
that information as a way of developing it further. So,
in other words, you open this up, you get a
(56:15):
lot more information from a lot more people using it
in a lot of different situations, and you see where
it works and where it doesn't work, and you make changes.
If it works in one case, maybe you pursue that
a little more. If it doesn't work in another, maybe
you change gears or you try to figure out a
way to make it work in that situation. But the
point was opening it up to more people meant more information,
(56:39):
and more information meant that they could make better decisions.
And this was not a bad idea. It could teach
Google how people would use the technology and where they
should concentrate on building out features. It was really just
a larger part of the overall Google Glass experiment, only
there was a small problem. The unveiling of Google Glass
(57:01):
made it a prestige item. The way they showed it
off and the way they rolled it out meant that
it became an object of exclusivity. You had to apply
to be in the club for one thing, so people
were either in it or they weren't. You had to
pay fifteen hundred dollars for it, which meant that you
(57:21):
needed to have a pretty good amount of discretionary income
if you were going to buy something like this. Wearing
Google Glass didn't just say I'm interested in technology. It
also said I have the money to blow on something
that hasn't even become a product yet. And it also
said I'm in a club and you aren't. And on
(57:41):
top of that, people began to express concerns about privacy issues.
I mean, how could someone feel like they could maintain
any sort of sense of privacy if they're going out
in public and other people have cameras mounted on their
faces all the time. I mean, that's what Google glass was.
In part. It was an external camera pointing outward the
world from the perspective of the person wearing it. Now,
(58:03):
Google Glass had included a light to indicate whether the
camera was on or not, so that way people could
see if perhaps video was being recorded or streamed live
over the internet, or if a photo was being taken,
and if the light was off, then presumably it wasn't.
Now I would encounter strangers who would see me wearing
these glasses, and they thought it was the most amazing thing ever.
(58:27):
And the day I received my pair in Los Angeles, it
was like I was being treated as a celebrity. People
were stopping me to ask about the Google Glass. They
wanted to learn more. They were fascinated by the technology.
They wanted me to take pictures of them using it.
But then I got home and I got around my friends.
My friends were different because I hung out with my
(58:48):
friends all the time. They knew me. But now I've
got a camera on my face pointed at them. And
more than once I had friends ask me if I
might take off the Google Glass while I had a
conversation with them, because they didn't feel comfortable with the
thought of a camera looking at them, even if the
camera was demonstrably off, if they knew for a fact
I wasn't using Google Glass, it was still uncomfortable. And
(59:10):
this was a problem that was widespread. It wasn't just
anecdotal in my case. So people who were in this
program began to be called a new name. The official
name was Glass Explorer, but the new name was Glasshole.
(59:31):
That hurt. The technology was not ready for a full
consumer product. It was never intended to be that. It
was meant to be a test bed for this technology.
And if you got involved in the program, and if
you were honest with yourself about what you were doing
and what the product was for, or what the Google
Glass was for, you should have been fine with that.
(59:54):
You knew from the get go that this was a
testing phase of a technology, not something that was ready
for prime time. But people began to get frustrated with
Glass's limited utility. They'd say, like, after a couple of months,
they just stopped wearing them because really, I mean, you've
got a camera in your pocket already, why do you
need another one. They weren't putting Google Glass through more
(01:00:17):
and more uses, which meant that the program was becoming
decreasingly important over at Google. They were getting less data,
which is what they needed in the first place to
make Google Glass a successful consumer product. So there just
wasn't enough there there to keep people's interest, and the
(01:00:39):
wrong sort of folks had jumped into the program. And
by wrong sort of folks, I mean people who thought
it was a prestigious thing to be in this club,
who spent the money as a way of having something
that other people did not have, not as a way
of expanding the technology, but as a way of expanding
their own status among others, and so interest and support
(01:01:01):
began to wane, both inside and outside the company. But
Glass did not disappear. Lots of those features became incorporated
into other stuff like Android and Google Home. The phrase
okay Google is still used to activate Google's Assistant, which
can be on the phone, it could be on Google Home,
it can be on other devices. And I apologize if
once again I woke it up for those of you
(01:01:22):
who have Android devices nearby, and that can do a
lot of different stuff depending upon what Google Assistant is
running on. For example, at home, I can use it
to listen to different types of music or control my
lights. On my phone, I can do all sorts of
different types of features. And you might even encounter Google
(01:01:43):
hardware, Glass hardware, in certain industries because, believe it or not,
Google Glass does still exist. Google still produces it and
still licenses it out, but only to other companies. It's
a business-to-business product now, it's not a consumer product.
This is a program called Glass at Work. So rather
(01:02:05):
than this consumer product, this is a device that gives
real-world assistance to people working in various industries, and
it all depends upon the augmented reality apps that the
glasses are running. But it could be something like what
I was talking about before with mechanics, giving real time,
overlaid information on how to do a repair or how
(01:02:26):
to take apart an engine or how to build one.
It could be for medical use, it could be even
just for corporate use. There are a lot of reasons
that Google Glass could come in handy. So it is
still around. It's just not a prestige product for Glassholes.
Now it's doing real work. The sad thing is, I
(01:02:49):
bet Google Glass's progress would be much further along if
there had not been such a hoopla about it when
it was first launched. If it had been launched more
like we're trying to develop this technology and less like
a rock star, then maybe people wouldn't have gotten so
wrapped up in it, And maybe that means that Google
would have been able to get more helpful information from
(01:03:11):
people who are really using the technology the way it
was intended, and maybe we would see Google Glass much
further in development, and maybe it would even be a consumer product.
I'm sure it was discouraging to the Google Glass team
to see how things went awry, but it does still
exist and AR is still a thing. There are related
(01:03:35):
products that are also pushing augmented reality into a new era.
There's Microsoft's HoloLens, that's a great example. There's Magic Leap,
about which we know very little; bits and pieces of
information leak out over time, but we're always learning a
little bit more. But I'm always going to have a
soft spot in my heart for Google Glass. It's something
that I think if it had just had a little
(01:03:57):
bit of a lower key rollout, we'd be looking at
a very different world with wearables right now, or at
least a more advanced one. Hope you enjoyed that classic
episode about Google Glass. Obviously you could argue this is
another case of Google shelving a product after having some
(01:04:17):
issues getting it to actually, you know, gain traction. Whether
Google will ever make a huge jump back into the
mixed reality space remains to be seen. Honestly, it kind of
remains to be seen if anyone can really get that
technology to a point where it has mainstream adoption. It's
such an expensive and difficult technology to master that it
(01:04:39):
may be a while before we see that. Apple certainly
is going to give it a shot, so we'll have
to keep our eyes on that. Hope you're all well
and I'll talk to you again really soon. Tech Stuff
is an iHeartRadio production. For more podcasts from iHeartRadio, visit
(01:05:00):
the iHeartRadio app, Apple Podcasts, or wherever you listen to
your favorite shows.