Episode Transcript
Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Speaker 1 (00:04):
Welcome to TechStuff, a production of iHeartRadio's
How Stuff Works. Hey there, and welcome to TechStuff.
I'm your host, Jonathan Strickland. I'm an executive producer with
How Stuff Works and iHeartRadio and I love
all things tech. And listener Robert Casey pinged me on
Twitter with a request that I do an episode about
(00:26):
touch screens and styluses, or styli. Now, my original
co-host, Chris Pollette, and I covered this topic on
an ancient episode of TechStuff that published all the
way back on October nineteenth, two thousand nine. Holy cow,
ten years ago! But I think it's past time to
(00:49):
revisit this topic and give it the full modern-day
TechStuff treatment. Now, there are a few interesting things
about touch screens in general that I'd like to get to.
One is that they're pretty ubiquitous today. It's a user
interface that you find everywhere, for everything from
mobile devices to lots of
(01:11):
different laptop and desktop displays. Now, granted, I've never owned
a laptop or desktop display that had touch screen technology
incorporated in it, largely because I didn't see a lot
of use in that, based on how I tend to
interact with computers. Not to say that there's no use
for it, just that the way I use computers, it
wouldn't make sense for me. Usually I don't have the
(01:32):
display so close that reaching out and touching it would
be terribly easy or comfortable, and most of what I
use computers for requires lots of typing, which isn't great
on most touch screen implementations. I guess you could pair
it with voice recognition and get more use out of it,
but I am curious if any of you guys use
computers with touch displays and what do you use them for?
(01:56):
As I'm sure there are plenty of use cases where
it is incredibly handy so to speak. Oh and there
will probably be a lot of unintentional puns in this episode,
and maybe a couple of intended ones will be but
a touch away, so to speak. Anyway, touch screens are everywhere,
but their history is fairly recent. Another thing I find
(02:19):
really interesting about them is that there are a lot
of different ways to go about it. The end
result is meant to be the same, but there are several
approaches to implementing touch screens, and each implementation has its
advantages and disadvantages, so we'll cover those in this episode.
And yet, one more thing I think is interesting is
(02:39):
really just how innovative touch screen devices have been. If
you look back at the science fiction films and TV
series from the nineteen fifties and even into the nineteen sixties,
when touch screen technology was first being described, you'll rarely
see examples of that idea. Touch screens were such a
leap forward that speculative fiction writers weren't really imagining
(03:03):
it as a user interface. That's why in series like
Star Trek, the original series, you'll see characters interacting with
physical dials and knobs and levers that are the controls
of a twenty third century spaceship. You know, you look
at those controls today and you think, oh, that looks antiquated.
Unlike a lot of technologies we've seen over the last
(03:25):
several decades, touch screens weren't heavily predicted in fiction. And
I think I'll have to do an episode dedicated to
tech that writers described years before it became a reality.
That's an interesting subject all its own, like the types
of technology that science fiction writers predicted before it happened.
You know, things like geosynchronous satellites. But that's
for another episode. Okay, so we're ready for our history lesson,
(03:49):
which, as you longtime listeners know, is sort of a
requirement in every TechStuff episode. Before I dive in,
I want to give a shout-out to Florence Ion's
article 'From touch displays to the Surface: A brief
history of touchscreen technology' in Ars Technica. It's a
fantastic summary of the evolution of the technology, and if
(04:09):
you want to learn more about the history of touch screens,
I urge you to seek it out. It was not
the only source I used when getting all the history stuff,
but it was a great resource. Also, I normally break
up the history and the description of how the technology works
into different parts of episodes, but in this case,
I think it works better to describe how each version
(04:32):
of the technology works as we get to them, and
as a peek behind the curtain, I came to that
decision after I was about a third of the way
done typing out all of my notes, so I actually
went back and revised my notes quite a bit and
rearranged things because I did not like the way the
episode flowed in its original form. So all that being said,
(04:53):
where did the idea for the touch screen interface come
from? Well, before there were touch screens that could interpret
the touch of a finger or stylus, there was the
light pen. The first light pen was part of Whirlwind,
a computer system developed at MIT whose technology fed into
the SAGE air defense system that IBM built for NORAD.
But the way touch screens work is different
(05:13):
from the way light pens work. With touch screens, the
technology for detecting a point of contact is generally built
into or behind or sometimes in front of the screen itself.
With a light pen, the detector is actually in the
pen side of the interface, not on the screen side,
so the screen is inert. It's the pen that's active.
(05:35):
Light pens have a photoelectric cell built into them, in
other words, a sensor that detects light. Typically a light
pen is tethered to the computer system it's used with;
it's actually physically connected with a cable. Holding a light
pen up to a screen would allow the light pen
to register when the monitor's electron beam scanned across that point,
(05:58):
because monitors in those days were based off the old
cathode ray tube technology, which uses an electron gun that
shoots a beam of electrons in row after row after row,
so it scans across the screen and then
down the screen. So it goes one line across, then
moves down the line, goes across, moves down the line, etcetera, etcetera.
(06:21):
And these electrons then hit against phosphor dots on
the back side of the screen, and that generates light. Now,
because the light pens were tethered to the computer system,
the computer could pick up precisely where on the screen
the light pen was sitting, and it did this by
cross referencing the time of contact with the position of
(06:42):
the electron beam at that moment. So the light pen
would detect this electron beam, and that message would be
sent to the computer, and the computer knew where the
electron beam was at that precise instant, and that way
it knew where the point of contact was. Now, this
is largely an outlier of the touchscreen topic, but it's
kind of a predecessor, so I thought I'd include it.
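To make that concrete, here's a minimal Python sketch of the timing trick, with made-up scan rates rather than anything Whirlwind actually used:

```python
# Hypothetical light pen decoding; the raster numbers are assumptions.
LINES = 480          # scan lines per frame
COLS = 640           # horizontal positions per scan line
LINE_TIME = 63.5e-6  # seconds the beam spends drawing one line (assumed)

def beam_position(t):
    """Given the time t (in seconds) since the frame started, return the
    (column, line) the electron beam was lighting up at that instant."""
    line = int(t // LINE_TIME)               # how many full lines so far
    t_in_line = t % LINE_TIME                # how far into the current line
    col = int(t_in_line / LINE_TIME * COLS)
    return col, line

# When the pen's photocell fires, the computer looks up the beam position:
print(beam_position(0.0042))  # the pen saw light 4.2 milliseconds into the frame
```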
(07:04):
Now, unlike a lot of other technologies, which tend to get
pretty muddy when you start asking questions like where did
this idea come from, we can be reasonably certain that
Eric Arthur Johnson or E. A. Johnson proposed the first
technological solution to creating a touch screen computer interface. Johnson
was an engineer at what was at that point the
(07:27):
Royal Radar Establishment, which was a research facility in Malvern, England,
and as the name suggests, this facility was chiefly focused
on developing new radar technologies. It was operating
as a research organization that worked closely with the British
Armed Forces. It would later become the Defense Evaluation and
Research Agency after merging with a few other organizations, and
(07:51):
later still it would become part of a defense contractor
in England called QinetiQ, spelled with a Q. Johnson was
working on improving the user interface for air traffic control staff.
He wrote an article with the title 'Touch display:
a novel input/output device for computers.' It was published
in the journal Electronics Letters in October of nineteen
sixty-five. I say in October because I don't have
the precise date of when in October it came out.
Then in nineteen sixty-seven he published a follow-up piece
titled 'Touch Displays: A Programmed Man-Machine Interface' that
further developed this concept and fleshed
(08:34):
it out, and he was describing what would become the
capacitive touch screen interface. And I also find this interesting
because for many years, it seemed the majority of consumer
products that had touch screens used an alternative to the
capacitive approach, known as resistive touch screens, and those
two technologies make up the majority of the touch screens
(08:57):
we tend to encounter. I'll explain the differences between them
when we get to each. But first, what are capacitive
touch screens, and how do they register touch? Well,
they only register a
touch if the substance touching the screen can hold an
(09:17):
electrical charge, So stuff like our skin. Our skin can
hold an electrical charge. It's electrically conductive. So if you've
ever used a touch screen while wearing gloves and nothing happened,
it's probably because the capacitive touch screen was not
able to detect any sort of electrical connection. The gloves
were acting like insulators. They inhibit the flow of electrical charge. That's why
(09:42):
there are companies out there that sell gloves that have
conductive wire or conductive pads at the fingertips. And it's
also why you can't use something like an inert plastic
stylus on a capacitive screen. You can use a
stylus that has a conductive surface at the tip; that
would work. But if it's just a plastic stick, for example,
(10:05):
it wouldn't activate the screen. But you could use something
else, like, you know, a hot dog, which, as I
understand it, used to be a thing in South Korea.
People would use hot dogs to activate their capacitive
touch screens when it was too cold for them to
go without gloves, which tells me you could probably
make a killing in Seoul by selling screen cleaners dedicated
(10:27):
to eradicating wiener grease from your screens. But how do
capacitive screens actually detect touch? There are a couple
of different approaches, but the general idea is to create
a surface that holds an electrical charge, and in many
implementations this is done with a grid of very fine
conductive wires running in rows and columns on an x
(10:51):
y grid, in other words, like a net. Sensors
pick up changes in this electrical charge when they happen.
So if an object that conducts electricity makes contact
with the screen, so for example, your finger, there's a
change in that electrical charge. Technically the change is a
drop in voltage. So by detecting where that change happens
(11:13):
along those x y coordinates, a microprocessor can interpret the
touch and associate it with whatever command you wanted to execute. So,
in the example of activating an icon on a screen,
the icon represents the execution of a particular app or program.
When the microprocessor or touch sensor detects a contact at
(11:35):
the location of such an icon, it interprets that touch
to mean execute the program associated with the image at
these coordinates on the display, and then it will launch
the app. A similar thing happens with gesture controls like
swiping or pinching. Engineers and programmers have to build in
this capability so that the system can interpret the meaning
(11:58):
behind the gestures and thus produce the appropriate result.
But the actual detection of a touch all comes down
to that change in voltage.
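To make that concrete, here's a minimal Python sketch of the idea; the grid size and threshold are made-up assumptions, not any real controller's firmware. It scans the row and column sense lines and reports the intersection where the voltage sags:

```python
# Hypothetical capacitive grid sensing; values are assumptions.
BASELINE = 1.0   # normalized no-touch voltage on every sense line
THRESHOLD = 0.1  # how much of a voltage drop counts as a touch

def find_touch(row_voltages, col_voltages):
    """Return the (column, row) whose lines show the biggest voltage drop,
    or None if no line sagged enough to count as a touch."""
    row_drops = [BASELINE - v for v in row_voltages]
    col_drops = [BASELINE - v for v in col_voltages]
    if max(row_drops) < THRESHOLD or max(col_drops) < THRESHOLD:
        return None
    # The touched intersection is where both a row and a column sag.
    return col_drops.index(max(col_drops)), row_drops.index(max(row_drops))

rows = [1.0, 1.0, 0.8, 1.0]    # a finger pulls row 2 down
cols = [1.0, 0.75, 1.0, 1.0]   # ...and column 1
print(find_touch(rows, cols))  # -> (1, 2)
```

Notice that one pair of row and column readings can only describe a single touch, which leads right into the next limitation.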
Early capacitive screens could really only detect one point of contact, so if you tried
touching the screen with more than one finger, typically the
screen would only register that first touch. This is because
(12:21):
the sensing technology was limited. Early sensors would estimate the
point of contact. It wasn't incredibly precise. It was precise
enough for general use, but you couldn't really get fine
tuning with it. Later implementations would incorporate better sensors, and
eventually you'd find capacitive screens in which each row
(12:43):
or column of wires had its own associated sensors, which
increased their accuracy, and it opened up the possibility of
a capacitive screen with multi-touch capability. Johnson would
receive a US patent for his invention in nineteen
sixty nine. So if you'd like an engineer's explanation of
the basic technology behind these capacitive touch screens, you
(13:05):
too can search for US Patent three million, four
hundred eighty two thousand, two hundred forty one. The patent
includes circuit diagrams and a flow chart that are helpful
to understand how it works as well. The capacitive
screen was a great innovation, but it saw little adoption
over the following few years. An alternative approach would get
(13:28):
a bit more traction in the short term. You could
say the tech world couldn't resist it. I'll explain more
after this break. In nineteen seventy, Dr. George Samuel Hurst
invented the first resistive touch screen. In the late nineteen
(13:51):
forties through the nineteen fifties, he had worked at the
Oak Ridge National Laboratory, an R&D facility that
is funded by the U.S. Department of Energy. Hurst
had been working in the field of atomic physics, developing
stuff like radiation detectors at nuclear testing sites. In nineteen
sixty six, he accepted a job as professor of physics
(14:12):
at the University of Kentucky. While in that role, he
continued doing research into atomic physics, but his team was
running into some obstacles, particularly in the use of a
Van de Graaff accelerator, which is an electrostatic generator that
can be used as a particle accelerator. To quote the Minerals,
Metals and Materials Society, which has a PDF about these devices, quote,
(14:37):
a high potential difference is built up and maintained on
a smooth conducting surface by the continuous transfer of positive
static charges from a moving belt to the surface. When
used as a particle accelerator, an ion source is located
inside the high voltage terminal. Ions are accelerated from the
source to the target by the electric voltage between the
(15:00):
high voltage supply and ground. Now that sounds complicated, but
it's essentially a Van de Graaff generator, and you've probably seen one
of these, maybe not in person, but probably at least
in a picture or video. They typically look like silver
orbs that can give off a spectacular spark when they operate.
(15:21):
There's typically a large silver orb on a pedestal, and
then there's a smaller silver orb that's located a certain
distance away from the large one, and when you turn
it on, inside that pedestal, there is a belt that's
running in a loop, and the belt is essentially building
up a positive charge inside that large silver orb, and
(15:44):
when the voltage difference is enough between the large
silver orb and the small silver orb, it will create
a spark between the two. It can be really really spectacular.
Now I'll have to do a full episode on those
in the future. Let's get back to touch screens. So
Hurst was working with others on his team to come
up with what they called an electrically conductive paper in
(16:06):
order to work with these Van de Graaff accelerators and to
make their notes more efficient. The paper would be
able to pinpoint contact, and when mapped to an x
Y coordinate system, could be used to specify a particular
location of contact. So Hurst thought, wait a minute, this
(16:26):
could be used as an interface for computers, not just
for registering a specific point in space for the purposes
of research. So Hurst returned to work for the Oak
Ridge National Laboratory in nineteen seventy, and he refined his idea.
He worked with nine other eggheads to create the first
resistive touch screen. Okay, so, a resistive touch screen has
(16:48):
a couple of layers, one of which is conductive and
the other of which is resistive, meaning it resists the
flow of electrons through the material, and separating those two
layers are small spacers. Spacers are essentially little blocks of
non conductive material. They act as a support structure. They
keep the two layers from being in contact with one another,
(17:10):
and there's also usually a scratch resistant layer on top
of the surface that faces the user, because using a
resistive touch screen means that you're actually having to
apply pressure on the touch screen. You're not just touching it,
you're actually pressing it. So when you press a resistive screen,
you apply that little bit of pressure. The conductive and
(17:31):
resistive layers move closer together, they're flexible, and eventually they
touch each other. Now both layers have an electrical current
running through them, and when they make contact, the electrical
field changes, and sensors and a microprocessor detect and analyze
that point of contact and register it so that the
device does whatever it is you wanted to do, from
(17:54):
allowing you to make a digital signature to executing a command.
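As a rough illustration of how a controller might read one of these, here's a Python sketch of the widely used four-wire scheme, which is a general approach rather than necessarily Hurst's exact design. The touched layers form a voltage divider that an analog-to-digital converter samples:

```python
# Hypothetical four-wire resistive readout; V_REF and ADC range are assumed.
V_REF = 3.3  # volts driven across one layer while it's being read

def read_touch(adc_x, adc_y, adc_max=1023):
    """Convert two ADC samples into normalized (x, y) coordinates.
    To read X, the controller drives V_REF across the X layer and samples
    the Y layer, which probes the voltage divider formed at the contact
    point; then the layers swap roles to read Y."""
    return adc_x / adc_max, adc_y / adc_max

x, y = read_touch(adc_x=512, adc_y=256)
print(f"Touch at {x:.2f} of the width, {y:.2f} of the height")
```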
Now, unlike a capacitive screen, resistive screens don't require
the point of contact to come from an electrically conductive material.
A resistive screen doesn't care if the thing touching the
screen is your finger, or it's a hot dog wiener,
or a plastic stylus or a rock or whatever it
(18:18):
is that's applying the pressure. All the electrical activity is
contained within those layers that make up the outer part
of the screen. So you can operate a device with
a resistive touch screen even if you're wearing non conductive gloves.
And if the screen were to get a little wet,
you never want your electronics to really get wet, but
(18:38):
let's say a little water gets on it, it wouldn't
affect the performance. That's different from a capacitive screen. If
you've ever had a smartphone get a little bit of
water on it. Let's say it's a very light rain
and you're trying to use your smartphone, you might have
encountered some problems with it. Well, capacitive touch screens don't
work so well when they get a little water on them.
It messes with their ability to detect the actual point
(19:01):
of contact with a conductive surface. So resistive screens don't
have that issue, although you still shouldn't really operate them
in the rain. Now that being said, resistive touch screens
tend to be harder to read and to see what's
on the screen. They require more layers than a capacitive
screen does, and they tend to reflect a lot
(19:21):
more ambient light than capacitive screens. Plus, while they are
pretty hardy, they do rely on detecting pressure, and depending
on how hard people are pushing while they're trying to
use these things, it can cause some wear and tear
on the device. If the spacers that separate those two
layers get damaged, then the screen can end up having
(19:42):
points of contact before you've even touched it, which sends
erroneous signals to the microprocessor. It doesn't actually know
where you're trying to touch it because it's getting conflicting information. Also,
they were limited to detecting a single point of contact,
which eliminated the possibility of multi-touch. Still, because
they could stand up to a lot of punishment and
(20:03):
they could work in different environments, they found a lot
of applications in different technologies, particularly in stuff like military tech. Now,
just one year after Hurst's resistive touch screen approach made
the news, the University of Illinois introduced the PLATO IV system.
PLATO stands for Programmed Logic for Automatic Teaching Operations. One
(20:26):
of the components of this system was an orange plasma
display with touch screen capability, But unlike the previous inventions,
this approach relied upon an infrared touch panel. All right, well, then,
how does an infrared touch panel work? Well, imagine that
you have an array of LEDs that
(20:47):
emit light in the infrared spectrum, so they're like tiny
little infrared flashlights. Infrared is outside the visible spectrum of light.
So to us it would seem as if an LED
light was off because we can't see that light. But
in fact, it would be beaming this infrared light across
the surface of a screen, and on the other side
(21:07):
of the beam would be a photocell, in other
words, a light sensor, one tuned specifically to detect
the frequency of infrared light that
it was paired with. So if you could see these lights,
it would look like a laser grid, kind of like
in Mission Impossible, where Tom Cruise is coming down from the ceiling.
(21:29):
It's a pretty awesome scene. But that's what an infrared touch screen
would look like if you could see those infrared beams. Now,
if the beam remains unbroken, then it's clear there's no
contact with the screen. If the sensors keep on picking
up the light, they just say, all right, nothing's touching.
But if something that blocks the light gets between the
(21:50):
LED that's emitting the light and the photocell,
that interruption would indicate that something has touched the screen.
And by arranging the LEDs and the
photocells and having them paired up in columns and
in rows as a grid system, then you have a
whole net of those invisible beams. Touching a point on
the screen would interrupt both horizontal and vertical beams along
(22:13):
the surface, so a microprocessor could detect which photocells
had registered the interruption and then plot the point where
that happened. So it's very similar to plotting a point
on a grid in math class. Like the resist of screens,
this approach had the benefit of working with any light
blocking material. It did not have to be electrically conductive,
(22:36):
so a plastic stylus works just as well as a
finger if the technology is implemented properly. Also, there's no
need to work thin metallic wires into the screen itself,
because the screen is not what's detecting the touch. It's
this laser grid, essentially, not really lasers, but this LED
grid with the photocells, so the screen wouldn't have
(22:58):
any wires in it. It would be brighter and provide
more clarity than early capacitive and resistive versions could.
That was a big advantage. The infrared approach would see
use in the nineteen eighties in the HP one fifty.
That was a computer system from, well, then Hewlett-Packard,
before they became just HP, and it cost the princely
(23:21):
sum of two thousand, seven hundred ninety-five dollars in
nineteen eighty-three. But if we adjust that for inflation,
that means in today's money it would cost about seven thousand,
two hundred dollars. Yikes. The HP one fifty's version
reportedly had some issues that made it less practical.
So I imagine that the fact that it wasn't
(23:43):
working perfectly and that the price tag was so high
meant it didn't get a whole lot of traction in
the market. Later on, devices like the Sony e-reader
would actually adopt this technology. Now around the same time
that the infrared system debuted in the mid nineteen seventies,
Guy Weinzapfel and Chris Herot of the Architecture
(24:03):
Machine Group at MIT. And I know I
butchered their names, I apologize. Anyway, they created a touch
screen interface that could detect not just touch but also pressure.
I mean it wasn't like a resistive screen in that sense.
It could actually detect how much pressure you were applying
to the screen. In fact, the system could detect up
to eight different signals from a single touch point, including torque,
(24:28):
which meant you could push your finger on the screen
and add some pressure to it and then twist
your finger, and the interface could detect that you were
making this twisting motion, and you could have that mapped
to some sort of effect in a program. What kind
of effect would depend entirely on the programming, so there's
(24:49):
no specific application, but it could be used for all
sorts of different stuff. So they published their work in
nineteen seventy-eight with the title 'One-Point Touch Input of Vector
Information for Computer Displays,' and it was published in the
periodical Computer Graphics. So if you want to read up
on that, you can. There are also illustrations of how it worked.
(25:10):
There's also a YouTube video of a very early
demonstration of this nineteen-seventies-era technology. But
it's pretty fascinating to see at work. And we're not
done with the different ways to achieve touch screen functionality.
There's still some more to chat about. Engineer Nimish Mehta
developed the first multi-touch system in
(25:32):
nineteen eighty-two, two decades before the iPhone. Now it's important to
note that this was not so much a touch screen
as it was a control interface like a touch pad,
not a touch screen, so think like a keyboard and
mouse or something along those lines. And it consisted of
a pane of glass with a translucent layer of plastic
(25:54):
on that glass, giving it sort of a frosted appearance.
There was a camera mounted below or behind the glass pane,
and that would detect points of contact. Essentially, it was
looking for dark spots to appear on that surface. That
would indicate that a finger was there blocking the light.
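If you wanted to mimic that detection in software today, a crude version is just a brightness threshold over a camera frame. This is my own sketch of the idea, not Mehta's actual code:

```python
# Hypothetical dark-spot detection for a camera-based touch surface.
import numpy as np

def find_touch(frame, threshold=40):
    """frame: 2D array of brightness values 0-255. Returns the centroid
    (x, y) of the dark region, or None if nothing is touching."""
    ys, xs = np.nonzero(frame < threshold)  # pixels darker than threshold
    if xs.size == 0:
        return None
    return float(xs.mean()), float(ys.mean())  # centroid of the dark blob

frame = np.full((120, 160), 200, dtype=np.uint8)  # bright, empty surface
frame[50:60, 80:90] = 10                          # a dark fingertip blob
print(find_touch(frame))                          # -> (84.5, 54.5)
```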
This isn't that different from what Microsoft would use on
(26:15):
the Surface tables a couple of decades later. I'll chat
about that in a second. So in this case, the
camera would detect these dark spots and through software, the
system would be able to interpret where those points of
contact were in relation to what was being displayed on
a screen. One benefit of this approach was that you
weren't actually making contact with the same surface you were
(26:38):
looking at, so you weren't smearing your grubby hands all
over the same surface you were trying to read. But
then again, you could argue the whole purpose of creating
a touch screen interface is to remove the barrier between
humans and the machines they're working on and make the
experience more intuitive. We have to learn how to use
things like a computer mouse or a keyboard. It's not
(27:01):
hard to learn, but it does mean manipulating something along
one surface while looking at another. So, for example, with
a traditional computer, you would use a keyboard and a
mouse on a plane that's at a ninety degree angle
with the display you're looking at, right? So if you
think of horizontal and vertical, your hands are manipulating objects
on a horizontal plane while you're looking at the reactions
(27:24):
on a vertical plane. So manipulating the mouse to move
a cursor requires your brain to translate the motion along
one axis of movement to a display that's on a
different axis. Now, once we learn how to do this,
and admittedly it does not take very long, it becomes
second nature, so it's not a big deal. But a
touch screen removes that necessity entirely because the thing you're
(27:46):
looking at and the thing you're manipulating is the same surface.
In nineteen eighty-three, Myron Krueger introduced another input method that wasn't strictly
a touch screen, but it's similar enough to merit inclusion
in this episode. Krueger's system could track a user's hands.
It was a vision based system, meaning it employed cameras
(28:08):
to detect and track hand motion and poses. It
could detect multiple hands, so paired with the proper software,
it could translate the actions of those multiple hands into
commands for a program. But in this case, the system
would respond to what is called dwell time. Hand gestures
or poses would correspond to specific commands, and a user
(28:28):
would have to hold his or her hands within view
of this system long enough for it to register that
it was in fact a signal to do something. While
this wasn't directly related to touch screen technology, it is
important in the history of gestural interaction, which does intertwine
with touch screen technology quite a bit. A lot of
(28:50):
our interactions with touch screens depend upon not just points
of contact, but specific gestures swiping, pinching, that kind of thing,
and I figured I should at least touch on Krueger's work.
Krueger wrote several books on technology that are pretty fascinating.
I feel I should probably dedicate a full episode to
him at some point, and we're not finished yet, so
(29:11):
when we come back, I'll talk more about multi touch
systems that actually did rely on making contact with a screen.
But first let's take another quick break. So the first
screen to feature multi-touch isn't the Surface. It's not
(29:32):
the iPhone, though you wouldn't necessarily know that based upon
the marketing around those devices. A Bell Labs researcher named
Bob Boie created the first multi-touch capacitive screen in nineteen eighty-four. Actually,
to be fair, it really was an array of capacitive
touch sensors that were mounted on a transparent film that
(29:55):
could be added as an overlay on top of a
CRT monitor, so the monitor itself wouldn't have the touch screen
built into it. It would actually be a peripheral you
could add to the monitor. This prototype never emerged out
of the lab for broader application, but it was able
to register the touch of multiple points of contact, and
thus you could create applications that would allow for that
(30:19):
and, you know, create new commands for how this
could work. And there are a couple of other approaches
to multi-touch. One, demonstrated by Jeff Han in two
thousand and six, used a rear projection system, a sheet
of acrylic, and LEDs that created frustrated total internal
reflection, or FTIR, which sounds to me
(30:41):
like meditating on self-discovery only to find out you're
actually a total jerk. But that's not what it actually means.
So to get into the nitty gritty of the technology
is more than a little bit complicated, but I'll give
it a shot from a very high level. So imagine
you have a sheet of really clear material like acrylic. Okay,
(31:04):
so you've got a sheet of acrylic. Now imagine that
we're looking at this sheet of acrylic from a side
edge, right, so the top surface and the bottom
surface are facing, you know, up and down from
our perspective. We're just looking at it from the side,
and you've got a bunch of infrared LEDs mounted on
either end of this acrylic sheet. Below the acrylic sheet
(31:29):
and facing upwards is an infrared camera. So the control
surface in this case would be above the sheet from
your perspective. The camera is below the sheet from your perspective.
So total internal reflection gives you a hint at what's
actually at play here. Those LEDs are beaming
infrared light into the edge of the acrylic
(31:53):
at a specific angle, beyond what's called the critical angle, which
results in the beam reflecting perfectly within the acrylic. So
imagine that on one side you have the beam
positioned in such a way that it's angled downward from
your perspective. The beam goes down, hits the inner bottom
(32:13):
edge of the acrylic, bounces up with no refraction.
It's a perfect reflection, and then encounters the upper edge
from your perspective, bounces off that again perfectly reflected, and
does so all through the entire length of this acrylic sheet. Now,
if you could see the beams of infrared light, you
would see how they were criss crossing around inside this
(32:35):
acrylic bouncing off those inner surfaces of either face of
the sheet. And it happens because physics. I mean, it
gets more complicated than that. But if I were to
jump into it, I would have to talk about Snell's
law and the refractive index, and honestly, it would get
super complex and it would be hard to describe without
the use of visual aids. So I'm just gonna take
(32:57):
a shortcut in this case and just say it works
because of physics.
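If you do want one small taste anyway, the critical angle itself is a one-line calculation. Here's a quick Python sketch using a typical refractive index for acrylic; the numbers are mine, not from the episode:

```python
# Snell's law: n1 * sin(theta1) = n2 * sin(theta2). Total internal
# reflection happens when light inside the denser material hits the
# boundary beyond the critical angle, where sin(theta_c) = n2 / n1.
import math

n_acrylic = 1.49  # approximate refractive index of acrylic (assumed)
n_air = 1.00

theta_c = math.degrees(math.asin(n_air / n_acrylic))
print(f"Critical angle for acrylic/air: {theta_c:.1f} degrees")
# ~42.2 degrees: beams hitting the inner surface more obliquely than
# this stay trapped in the sheet until a fingertip frustrates them.
```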
Anyway, the result is you have these perfectly reflective beams of infrared light bouncing around inside this
acrylic sheet. But if you were to touch the surface
of the sheet on the active side, the top side,
you frustrate this total internal reflection. Some of that light
(33:20):
that was being reflected inside the acrylic sheet at the
point of contact can pass from the surface to your finger,
so reflection is no longer total. And the infrared count
camera that's mounted beneath the sheets, pointed up at it
will detect that change in the reflectivity at the point
of contact, registering it as a touch, and this system
(33:43):
can detect multiple points of contact on the same surface,
so it is a multi touch approach. In two thousand seven,
Microsoft showed off a table sized computer system that it
called the Surface. Since then, Microsoft has used the name
Surface for some of its other products, largely the
tablet style computers. But the early version of the Surface
(34:06):
was much much larger, and it was a collaborative workspace
where multiple people could stand around this interactive table, the
top of which was a computer display, and it could
detect multiple points of contact on that computer display. You
could manipulate virtual objects, you could play games, you could
do a lot of different stuff. The Surface worked using
(34:28):
some of the methods I've already mentioned in this episode.
Inside the table was a projector, and the projector was
projecting the images that you would see on the actual surface,
So what you were looking at was really a projection
being shot against the backside of the screen you were
looking at. So, in other words, the Surface's screen was
(34:49):
what we would call a rear projection screen, very much
like rear projection televisions. Also inside the Surface were cameras
that could detect the points of contact on the
opposite side of the screen, on your side, in other words.
Microsoft also designed a program that could recognize patterns that
were printed on special stickers. Then they could put those
(35:09):
stickers onto solid objects. So if you place one of
those small objects on the surface, the camera underneath would
be able to recognize the pattern on that sticker and
then execute an associated command, which could be anything. But
one version I saw was that you could have
sort of a synthesizer application, one that could play
pre-rendered styles of music, and each sticker would represent
maybe a specific tone or a sound pattern or a
sound effect. And by arranging a series of these
objects on the surface, you could build a sound.
So you could create a series of sounds, like in
(35:53):
a particular rhythm. Manipulating these objects and changing their
location on the surface might do things like change the
pitch or the volume of each sound. So you would
have this interactive kind of music surface to work with.
And that's just one example of what you could do
with this type of technology. There were lots of potential applications.
(36:15):
Microsoft would actually bring that version of the Surface to
CES two thousand eight, but the company was
also quick to say that the technology wasn't actually consumer tech. Rather,
this was technology that businesses would be able to purchase
for their own purposes. So you might have it in
a retail establishment, you might have it in an entertainment establishment.
(36:38):
One of the versions I heard about was being used
in a Las Vegas bar where you could play games
on the table, or you could use your table to
send messages to other people around the bar on their tables,
which kind of skeeves me out a little bit. But
then again, I'm not a bar person, so maybe I'm
just the wrong kind of guy to look into that
(36:58):
sort of thing. It just seems like another way to
kind of harass people without them, you know, wanting it.
Who am I to say? The same year that Microsoft
first demonstrated the Surface, Apple introduced the iPhone. And again,
Apple didn't invent capacitive touch or even capacitive
multi-touch; heck, even the gestures associated with gestural interaction
(37:20):
on the iPhone were already described by other people in
other systems, but the packaging of all of those features
in a sleek smartphone form factor wowed the crowds. The
iPhone brought touchscreen technology into the spotlight for lots of people,
when earlier it had been a type of user interface
that really only applied to electronics in niche applications and implementations.
(37:45):
The iPhone was not the first consumer gadget to rely
on touch screen interactions, but I think it's safe to
say that Apple got it so right that it changed
the game for everyone, and it became the go to
interface for mobile handheld electronics. Other than the methods I've
already covered, there are a couple of more rare forms
(38:07):
of touch screens out there. One is the surface acoustic
wave touch screen. Now, as that name implies, this version
of a touch screen relies on sound, specifically sounds that
are in the ultrasonic frequencies. Those are at such a
high frequency range that they are imperceptible to us, we
cannot hear them. Ultrasonic speakers would sit along the edge
(38:30):
of the screen and emit these high-pitched sound waves,
and those sound waves reflect back and forth across the surface,
kind of like waves go across the water, and when
something would come into contact with the screen, it would
disrupt the path of those waves. And again it would
be a lot like if something large were to get
into a wavy pool of water. Now, with water,
(38:55):
the waves are really big, particularly compared to ultrasonic frequencies,
and that kind of makes it a little hard to
see what the effects are in this interrupted path system.
But it's much easier to see the change with ultrasonic
waves because they are so tiny, and sensors detect the
point of interruption to determine where you touched the screen. So,
(39:15):
in other words, they detect where the waves are no
longer able to travel unimpeded, and that is clearly
the point of contact.
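You can picture the controller comparing the received wave against a no-touch baseline and turning the timing of the dip into a distance. Here's a toy Python sketch with assumed numbers, not any particular product's firmware:

```python
# Hypothetical surface-acoustic-wave position sensing.
WAVE_SPEED = 3.1e3  # m/s, rough speed of a surface wave in glass (assumed)

def touch_offset(received, baseline, sample_rate, dip_ratio=0.7):
    """Compare the received amplitude profile to a no-touch baseline and
    return the distance (meters) to the first point where the signal dips."""
    for i, (r, b) in enumerate(zip(received, baseline)):
        if r < b * dip_ratio:            # a finger absorbed energy here
            delay = i / sample_rate      # seconds into the sweep
            return delay * WAVE_SPEED    # distance along this axis
    return None

baseline = [1.0] * 100
received = list(baseline)
received[40] = 0.4  # the finger soaks up wave energy at sample 40
print(touch_offset(received, baseline, sample_rate=1e6))  # ~0.124 meters
```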
imaging touch screens. These screens have technology that monitors the
electromagnetic field on the glass screen, and when something comes
(39:35):
close to the screen's surface, it interferes with that electromagnetic field,
and the system detects that and interprets it as a touch.
These sorts of screens can also be pretty rugged, and
so they are frequently used for stuff like industrial or
military applications. And there we are. That's the history and
operation of touch screens. It's pretty complicated because, like
(39:58):
many other technologies, there were a lot of people taking
many different approaches, all in an effort to achieve similar goals,
and some of what I've described has also been used
in other types of interfaces that don't involve a touch
screen at all, such as the gesture controls used in
systems like the Microsoft Kinect peripheral for Xbox systems. Now,
(40:18):
I think it's safe to say that the Kinect was
largely a failed experiment. But I don't think it was
because it didn't work, because for the most part it
did work. Now, there were some rather egregious exceptions to
that rule, but for the most part it worked. Rather,
I think it failed because the system never evolved beyond
a gimmick or oddity in the eyes of most owners.
(40:41):
You could argue a large reason for that was just
a lack of very compelling content in the library of
games and applications that supported Kinect interactivity. Still, the Kinect
relied on a lot of work that was being done
in the field of touch screen user interfaces and
gesture controls. So while it's not a direct application of
(41:02):
the technology, it is definitely related to it. Likewise, there
have been several systems for everything from virtual environments to
art installations that have used similar technologies to some touchscreen implementations.
Most of these have been visually or optically based, so
in other words, they use cameras to track the gestures, motions,
(41:23):
and poses of people within a physical space in order
to create some sort of effect. You may have been
in one of these installations or applications where your movements
through the space are reflected in some way. Maybe it's
a video effect, maybe it's sound. But a lot of
that also relies on technologies related to the ones that went
(41:45):
into developing touch screens. Now, considering the ubiquity of mobile devices,
I expect we'll continue to see advancements in touch screen
technology over the years. It may involve new approaches to
achieving the results, or it may involve refined implementation of
existing approaches to improve the overall experience. And I'm not
sure if it will translate to all our electronics. I
(42:08):
think there are some implementations where touch screens make a
lot of sense, and in others maybe not so much.
Like I'm still curious if people with desktop or laptop
displays that include touch sensitivity really use that feature all
that often. I mean, maybe they do. I'm only basing
this off my own anecdotal evidence, which obviously is limited
(42:29):
and therefore largely worthless. But it's hard for me to
imagine using a touch screen laptop or desktop display regularly.
In fact, when I use my Microsoft Surface tablet as
a laptop, because I've got the connected keyboard I
can use with it, when I'm using it in
that form factor, I totally forget that the screen actually
(42:50):
has touch capability, that I could reach out and touch
things on the screen instead of using the mouse pad.
But also I have to admit, as Tari will tell
you without a moment's hesitation, I'm old, and so it's
entirely possible that I'm the odd man out here. I
do think it's true that even when an interface works,
(43:11):
and it works well, it's not necessarily the best interface
for everything. So if I'm not the odd one out
and most people find touch screens unnecessary for laptops or desktops,
maybe we won't see as many of those types of
devices with that feature included in the future. Kind of
like how televisions for a while all had three D capability,
(43:35):
and then people said, I don't want three D. I
don't care for it. It's too irritating. And now if
it is a feature, it's rarely one of the main
ones mentioned on the box for those televisions. We might
see the same thing with touch screen tech for
certain types of electronics, but for things like mobile devices,
it totally makes sense, and I expect we will continue
(43:57):
to see it used there, and improvements in that implementation.
And that wraps up this discussion about touch screens. Obviously
I could have gone into a lot more detail about
each of those, but that would have required a whole
mini series on it, and honestly, I'm not sure that
I could do all of that without losing my mind.
(44:17):
So we're gonna wrap up this episode. If you guys
have suggestions for future episodes, you can write me.
The email address is tech stuff at how stuff works dot com.
You can drop by the website that's tech stuff podcast
dot com. There you're gonna find an archive of all
of our shows, including the two thousand nine episode where
I first talked about touch screens with my co host
(44:38):
Chris Pollette, as well as every other episode of TechStuff.
You'll also find links to our social media presence and
a link to our online store, where every purchase you
make goes to help the show, and we greatly appreciate it,
and I will talk to you again really soon. Tech
(44:59):
Stuff is a production of iHeartRadio's How Stuff Works.
For more podcasts from iHeartRadio, visit the
iHeartRadio app, Apple Podcasts, or wherever you listen to
your favorite shows.