
August 7, 2013 41 mins

Is what we see in the movies possible, can you really zoom and enhance? What are some of the ways we manipulate digital photos and video? What is light field capture and what can it do?




Episode Transcript

Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Speaker 1 (00:00):
Brought to you by Toyota. Let's go places. Welcome to Forward Thinking. Hey there, and welcome to Forward Thinking, the podcast that looks at the future and says, zoom and enhance. I'm Jonathan Strickland, I'm Lauren Vogelbaum, and I'm Joe McCormick,

(00:23):
and we wanted to talk about this idea of playing
with cameras and the images that they take, whether it's
still photography or video images, and this idea of being
able to manipulate those images and maybe even do this
crazy zoom and enhance thing. You guys, you guys are familiar with the TV, right? I need some help, you guys. What? Why is that? There's been a theft! What was stolen? Well,

(00:48):
somebody broke into my home and they stole all of
my VHS tapes of the Super Mario Show. Wow, that's
all of them. That's not necessarily a loss, but I appreciate that you're hurting, Lauren. No, this is a loss not just for Joe but for all mankind. It was the
Super Mario Brothers Super Show and now I don't have it,

(01:10):
but I've got a lead. Okay, So I had a
digital security camera installed in my home. Naturally, as long as you have a VHS collection of Super Mario.
But you have a digital security camera. A digital security camera, and it records video in still frames of, oh,

(01:32):
I guess they're about a hundred pixels each, so very
low resolution. That's a hundred pixels, I mean... well, anyway, I've put together puzzles that had a hundred pieces, so it's not that much. And it takes a picture
every sixty seconds, so that's not bad. I don't call
that video. I call that a series of very crappy

(01:54):
still photos. But I have a photo of the thief, okay,
all right, and I'm going to try to find him.
It could also be Sasquatch. We don't know yet. But I need your help, okay, to identify this person. Right.
So, do we... I know that we've got a lot of techie people here in the office. Do we have one of those enhance computers? I can just scan this in, right, and we can press enhance and

(02:16):
it will give us the dude's face? No, we don't have one of those. Why don't we have one? They've got to be pretty cheap by now, right? No. So, all right, so what we're referring to here is this trope. First of all, Joe, I certainly hope that that was just an example story, that you haven't actually suffered a terrible loss like that. Just kidding, all right, good.

(02:37):
so the Super Mario videotapes are safe. Okay, good. So
what we're talking about is a television trope. In fact,
one of my favorite websites to just waste time on
is one of the TV tropes sites where you can
just read about all these sort of cliches that have been used in television and film for about as long as those types of media have existed, really. Um,

(03:00):
and some of them are a little newer than others.
Zoom and enhance, pretty new, because we didn't have those kinds of, we didn't even think about those kinds of capabilities until maybe the last couple of decades. Okay, so it may have been earlier than this, but the earliest example I can think of is in 1982, the movie Blade Runner, Ridley Scott, Harrison Ford, based on

(03:22):
the Philip K. Dick novel. But there's a scene in it
where Harrison Ford's character is doing some investigating. Yeah, he's
got a photograph and he's got it on his computer
screen and he keeps shuffling around, looking at different frames
of the photograph and then zooming way in and enhancing
the photo. Yeah, it's, um, it's the way TV

(03:46):
tropes defines it. They call it the Enhance Button. Very similar to the scenario you were talking about at the very top of the show, Joe. It says: a staple of any crime drama, the Enhance Button on the computer is able to turn a tiny, blurred, grainy image in a photo or video into a clear, unmistakable piece of evidence. This process is virtually instantaneous, unless added dramatic tension is required,

(04:08):
in which case extra technobabble or more Applied Phlebotinum may be needed. These, by the way, are other tropes. It may require someone to stand next to the computer intoning enhance, enhance for full effect. So yeah, this idea of taking
something like an image, a photograph, that is imperfect, it

(04:29):
has a limited amount of data in it, and then
enhancing it so that it becomes something useful, something recognizable,
is that a thing? Is that possible? All right? At
least the way it's done in Hollywood? Not so much. Well,
it's not possible the way they do it because in
these movies and TV shows. You're just getting information that

(04:50):
just blatantly was not there before, right. Right, you can only enhance up to as much information as actually exists in the physical record of that information. Exactly. You can't extrapolate new things whole cloth. Right. The way, the
way I'd put it is that there's sort of a
bottleneck on data at the moment an image is captured.

(05:11):
Um data comes into the camera and you record a
certain amount of it, and then data comes out later. Now,
you can manipulate that later data in all kinds of ways,
but you can't ever put in more than came in
through the lens to begin with. Well, not directly onto the raw image file. You can manipulate it enough

(05:31):
and through guesswork, really. You're kind of making some assumptions, where you can fill in information that is missing. But it's not like you are uncovering the information that was there. You're actually creating new stuff to go in with the stuff that was captured at that moment. Right. It would be like if you saw,
for example, somebody's shadow in a photograph. Right? Now, if

(05:54):
somebody was really good, I don't know if this is a real thing, but really good at identifying humans by the shadows they cast, they could do the same kind of thing. They could look at the shadow in the picture and say, oh, it's probably this person. But you still wouldn't actually have that person's image somewhere hidden in the file, right. And you know, also, like if

(06:15):
you've got a picture of the back of a person's head,
it's not like the image of their face is contained
in that data somewhere, and you can't just rotate the person and have... you know, there's no button to do that. It's amazing how often you can do that
in science fiction though, like turn it around. I want
to see this. I want to see this still image
from the opposite angle, as if we can magically place

(06:36):
the camera anywhere after the fact. What you
can do in a virtual environment? I mean, there are
camera tricks you can do in a virtual environment that
completely defy all laws of physics, because they aren't a problem in a virtual environment. So, for example, in
a video game, there are a lot of video games
where you can capture the footage of you playing while
you're playing the video game, and then watch it later

(06:57):
and you can even watch it from various camera angles, depending upon the type of game. Like, some games give you essentially free rein. You can place the camera
anywhere you like. You can have a free roaming camera
and move it dynamically as the scene plays back. So
even though while you were playing you had one set
of perceptions, you know, you might have been able to
move the camera around then too, but you were limited

(07:18):
at that time by whatever was happening right then and there.
But in playback you may be given unlimited freedom. Now
that's just not the case with real life. I mean,
obviously, you know, that data exists in real life, but if there's nothing there to record it right then... So when I was talking about the, you know,
creating stuff so that you can fill in the gaps,

(07:40):
it's where, again, you're not uncovering lost information, or obfuscated information. That's the way
it comes across in these television and film examples where
the answer is hidden in the file and you just
have to dig it out. Yeah. Yeah, there's just a
button on your computer that somehow makes it go from
blurry to not blurry, and that the info

(08:02):
was always there, it just needed to be not blurry.
That's not the case. That's not the way it works. Um.
There was a great article in Wired that talked about this, and talked about some approaches that various technicians have made to really address this issue. You know,
I was talking about creating data to fill in those gaps,

(08:24):
the stuff that's not actually there in these photos. You know,
in the Hollywood and TV versions, it always seems like
the information is there, it's just blurry or whatever. And
when you hit this button, it removes whatever that problem
was and you get to look at the information that
was always there. That's not the case, right. We're talking
about photos and video where stuff is missing. But you
can start to fill in some of those gaps by

(08:46):
creating stuff, by making guesses. And there was a great
article in Wired that talked about this and talked about
compressed sensing and sparsity. These are concepts that are used
by technicians to kind of fill in information that might
be missing from a file. You know, maybe the image is just really fuzzy. It could be an old photograph,
or it could just have been made with a poor camera,

(09:06):
or maybe something was on the lens or whatever, and
we can use these techniques to try and fill in information. Now,
in the Wired article, they had a really interesting analogy.
They said that, imagine that you have a book where
on one page of the book you have almost half
or maybe even more of the information missing. So you've

(09:26):
got just missing words in sentences. Now, on the pages before and the pages after, you've got some information there. But on the page that you're interested in, you're missing words.
It would be like trying to extrapolate what those missing
words were just based on the little bits of information
you had on the previous and following pages. It's really
difficult to do. However, with compressed sensing, there was an

(09:49):
interesting development. There was a guy named Emmanuel Candès who was looking at an image called the Shepp-Logan phantom. Now, this is actually an image that technicians use in order to test imaging algorithms, and it kind of looks like an alien with a slightly raised eyebrow, kind of a snarky alien. Now, what he did was

(10:11):
he took an older, fuzzy version of the image, not
an older one, but a fuzzy one. So he was
testing an algorithm, trying to see if a particular technique
would allow him to sharpen this image up
a little bit. And the way the technique works is
it looks for the simplest approach to filling out the

(10:32):
information that's missing. It samples pixels in the image, and then from those samples, it starts to create simple shapes that are color-matched to the various parts of that sampled image. Now, it tries to use the fewest number of these shapes to fill out this photo, or this picture. It doesn't have to just be

(10:54):
a picture. By the way, you can actually apply the
same technique to other kinds of media, including music, where
you know, you might have a low sample rate for an old MP3 and you want to try and enhance it. It could do the same sort of thing with that. Right, right. Basically, anything with a
wave form is going to operate under very similar principles.
So what they're doing here is the reason why he

(11:15):
was doing this in the first place was not just
to get a sharper picture of an alien. The idea
was to try and enhance MRI images, because the best way to get a very, very clear MRI image is to put someone in an MRI machine
and have them stay perfectly still for a couple of minutes.
But to stay perfectly still for a couple of minutes

(11:35):
usually means having to put them under anesthesia so that they actually stop breathing. That's how still they needed people to be. Now,
that's not really that great an option. So he was
looking at what if I took this approach to try
and take an image that was captured in say forty
seconds as opposed to two minutes, and then try to

(11:56):
use this simple technique to see if I can sharpen it up. So he runs the image through this algorithm he's created, and it turned out that the resulting image was a perfect match, or a near-perfect match, to the original version of the Shepp-Logan phantom image. And he thought, well, that's weird.

(12:19):
That can't be right. There's no way that worked. And
so he tested it again and it got the same
result, and ended up showing it off to some other folks, and they really began to put their heads together and wrote a white paper on it, a research paper all about this technique. And, um, yeah, it takes about a hundred thousand pixels, for example, and just really focuses on those and builds out these shapes,

(12:42):
and it could build a usable image. But um, a
couple of caveats. One is that it can take a
few hours to do this as the algorithm goes through
all the different variations of the simplest way of approaching this. And there is a chance that the resulting image that
you get back at the end is not a match

(13:03):
for what it should have been. There is that chance.
It's a small chance according to the researchers, but it
can happen, because the computer is just basically guessing based upon the little bits of information that it has.
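To make that concrete, here is a minimal sketch of sparse recovery in Python, the optimization at the heart of compressed sensing, using iterative soft-thresholding. The problem sizes, the random sensing matrix, and the regularization weight are illustrative assumptions, not details from the episode or from Candès's actual solver:

```python
# Compressed sensing toy example: recover a sparse signal from far fewer
# random measurements than the signal has entries, via ISTA
# (iterative soft-thresholding) on the L1-regularized least-squares problem.
import numpy as np

def ista(A, b, lam=0.01, steps=2000):
    """Solve min_x 0.5*||Ax - b||^2 + lam*||x||_1 by soft-thresholding."""
    L = np.linalg.norm(A, 2) ** 2              # Lipschitz constant of gradient
    x = np.zeros(A.shape[1])
    for _ in range(steps):
        z = x - A.T @ (A @ x - b) / L          # gradient step on the data fit
        x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # shrink to zero
    return x

rng = np.random.default_rng(0)
n, m, k = 200, 60, 5                           # unknowns, measurements, nonzeros
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.normal(size=k)
A = rng.normal(size=(m, n)) / np.sqrt(m)       # random sensing matrix
x_hat = ista(A, A @ x_true)
print("recovery error:", np.linalg.norm(x_hat - x_true))  # small, not zero
```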
So in that sense, if you have this area

(13:23):
of doubt where you know, you might say, well, you know,
according to our computer model, this is what the image
would have looked like if we had looked at it
at this scale. Uh, you know, you have to
keep that in mind. You have to remember, like, this
is what it probably looks like, not this is definitely
the image. So that's another difference from the Hollywood version.

(13:44):
Right, with the Hollywood one, as soon as they do the enhance, there's our guy. There's no way it's not our guy. It's the guy who was billed third in the film, so now we know we got him. Whereas this is just some dude. Yeah. So, I mean, that's
an interesting approach. And the whole idea of sparsity is
this idea of going with the simplest and fewest

(14:08):
number of simple shapes. So it might be, like, it detects a couple of blue pixels, or a few blue pixels, in an area, and then it just fills out the rest of that area with the same color blue. So it's approaching it, you know, saying, well, this is probably part of a border for this thing, I'm just going to fill in the rest. And it does that thousands and

(14:29):
thousands of times for the entire image. But yeah, it's
still not the kind of instantaneous approach we see in
popular films and television. So, well, on a much smaller scale, and in that kind of instantaneous sort of time frame, that's a function of Photoshop. I mean, you know, you can click that... I forget what the

(14:49):
function is called in there. You can just click that
little button and have it kind of fill out what a line would have looked like, yeah, according to what's around it. There are a lot of algorithms out there that take this approach, where it looks at the existing data and tries to extrapolate what the rest of it should be.
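As a rough sketch of that fill-in-from-surroundings idea, here is OpenCV's inpainting function. This is a stand-in for whatever Photoshop does internally, not a copy of it, and the filename and patch coordinates are made up for illustration:

```python
# Guess missing pixels from their surroundings with OpenCV inpainting.
import numpy as np
import cv2

img = cv2.imread("photo.jpg")                      # hypothetical input image
mask = np.zeros(img.shape[:2], dtype=np.uint8)
mask[100:120, 200:260] = 255                       # mark the damaged patch
fixed = cv2.inpaint(img, mask, 3, cv2.INPAINT_TELEA)
cv2.imwrite("filled.jpg", fixed)                   # plausible fill, not recovery
```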
And I'm talking about a few pixels, right. And they have different degrees of sophistication and

(15:13):
resolution and uh, but nothing on the scale of the
Hollywood version. Sure, sure. I do think that what we see in those kinds of everyday applications is what leads to some of the confusion about what, like, the professionals can do. Like with Google Maps. You know, you can zoom and enhance in a Google map, but that's because it's

(15:34):
built of these multiple tiles of images, right, that have greater acuity on the lower levels. It's not like, when you are at, you know, satellite view of the Earth, that it has all the same detail as it would if you were in a low-flying plane.
They have different sets of images that are geolocated at

(15:55):
particular points on the Earth, and you're shifting from one set of images to another. And the real genius of the program is how it allows you to do that shift. Yeah, and it does it in such a way where it kind of makes you feel like you are having a seamless experience, but in reality you are switching from one set of photos to another.
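A minimal sketch of that tile swap, using the standard Web Mercator tile arithmetic that slippy-map services of this kind use; the Eiffel Tower coordinates are just an illustration:

```python
# Zooming doesn't sharpen one image; it selects tiles from a finer pyramid level.
import math

def latlon_to_tile(lat_deg, lon_deg, zoom):
    """Return the (x, y) index of the map tile covering a point at a zoom level."""
    n = 2 ** zoom                                   # tiles per side at this zoom
    x = int((lon_deg + 180.0) / 360.0 * n)
    lat = math.radians(lat_deg)
    y = int((1.0 - math.asinh(math.tan(lat)) / math.pi) / 2.0 * n)
    return x, y

for zoom in (3, 10, 17):                            # world view down to street view
    print(zoom, latlon_to_tile(48.8584, 2.2945, zoom))  # the Eiffel Tower
```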
There's a similar thing, in a way, an idea called

(16:18):
gigapan or gigapixel images. Gigapan is just one of the
many terms for it. So this is the idea of
taking several high-resolution pictures of a scene and then stitching all those pictures together to kind of make sort of a panoramic image, but panoramic beyond just, you know, a very wide photo. It

(16:39):
could be very tall. And the cool thing is that it allows you to zoom in at crazy levels, because instead of it being just one big picture with lots and lots of information, it's actually a mosaic of all these high-resolution images. So I've
seen some of, like, sporting events, like the Olympics
or something like that, and it's a picture of the

(16:59):
crowd and when you first look at it, you just
see a mass of faces. It's just a huge number
of people, maybe a hundred thousand people, and then you
could arbitrarily say, all right, I want to zoom in on this one section of the crowd, and you zoom in until just that one section fills up your screen, and now you can suddenly see actual details. And then you say, I kind of want to zoom in on

(17:20):
that collection of folks, like that small
group of people right there, and zoom in even further. And depending upon how many photos they've taken and how
high resolution the photos were, you might get to a
point where you can read the text on a person's shirt,
or at least be able to see what kind of
basic design is on a person's clothing if they are
wearing something that has a big logo on it or something,

(17:41):
you might be able to tell. And the illusion is that you've got this one picture that you can just zoom in on indefinitely, just like you could in the movies. But the way you produce that file is actually by taking all these different pictures. It's not like it is a single element the way you would think from a film or TV show.
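A minimal sketch of producing that kind of file with OpenCV's high-level stitcher; the filenames are placeholders, and a real gigapixel rig feeds in hundreds of overlapping telephoto frames from a motorized mount:

```python
# Stitch overlapping photos into one large mosaic.
import cv2

tiles = [cv2.imread(name) for name in ("tile_0.jpg", "tile_1.jpg", "tile_2.jpg")]
stitcher = cv2.Stitcher_create(cv2.Stitcher_PANORAMA)
status, mosaic = stitcher.stitch(tiles)
if status == cv2.Stitcher_OK:
    cv2.imwrite("mosaic.jpg", mosaic)   # one huge image built from many frames
else:
    print("stitching failed:", status)  # e.g. not enough overlap between tiles
```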

(18:03):
There's actually a whole family of image processing techniques that are known as super resolution. That's the idea of taking a picture and trying to somehow increase the resolution after you've already got the final product. One of the techniques that I think is interesting is... so we've talked about single

(18:25):
frame increases in resolution, but what about multiple frames. So
imagine you've got video and it's not my one image
per second security camera, uh, one per minute, whatever I said. It's like continuous video. You can actually put together

(18:46):
frames in aggregate to make each frame sharper. Interesting. Actually, that's
the way that the human eye works. That is basically
how we are all seeing things all the time. We
see in, I mean, I guess you could call it still frames, but basically in video, and our brains kind of compile the images as I'm looking

(19:06):
back and forth between the two of you, or kind of going, like, what's that weird thing in the corner behind Noel's head? No, but, um, now you know, your brain puts together this information into more or less a single image. So,
if you were to take a camera, let's say you've
got a digital camera that could take burst photos like
a whole bunch in just a blink of an eye

(19:28):
then I assume you could apply the same sort of
approach to try and create the best possible version of
the picture you were trying to take. Oh, I'm sure.
I mean what's operating here is that when you have
multiple frames, each frame is probably giving you some type
of information that wasn't available in the frame before. So

(19:50):
if somebody's turning their head or something like that, at different points you see different parts of it illuminated, some parts are closer to the perfect, ideal focus. And so by sort of selecting the best part of each of those images and averaging them, right, you can get a sharper image than you had in any of the original frames.
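A minimal sketch of that align-and-average step, assuming grayscale floating-point frames and purely translational camera shake; real multi-frame super-resolution pipelines add sub-pixel warping and sharpening on top of this:

```python
# Align each frame to the first by phase correlation, then average:
# noise cancels across frames, and shared detail is reinforced.
import numpy as np
import cv2

def fuse_frames(frames):
    """frames: list of 2-D float32 arrays (grayscale). Returns their average."""
    ref = frames[0].astype(np.float32)
    acc = ref.copy()
    for frame in frames[1:]:
        f = frame.astype(np.float32)
        (dx, dy), _ = cv2.phaseCorrelate(ref, f)    # estimated (x, y) shift
        m = np.float32([[1, 0, -dx], [0, 1, -dy]])  # translation back onto ref
        acc += cv2.warpAffine(f, m, (f.shape[1], f.shape[0]))
    return acc / len(frames)
```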

(20:10):
This is kind of like those commercials you see for cameras, where you can swap out people so that you just see the best faces for everyone. Like, you've got the group photo,
and you took a series of group photos, and you're like, well,
little Billy was being a complete snot in the first

(20:30):
five or six of these, but in the seventh one he's looking, you know, at the camera and smiling. Unfortunately, Dad has his eyes half closed because he's just about to sneeze. So we need to combine all these photos into the ideal family photo that never existed, but seems to. Like, that moment never existed, right? The moment where

(20:50):
everyone in the family is smiling and content and behaving
never existed, but you've created the illusion that it has
by combining all these images into one. Or if you're
George Lucas and you really like an actor's face in
one take, but their body movements in another, so you just paste the two together in Episodes One through Three. Yeah,
or you know, if you just don't like actors and

(21:10):
tell them not to act. Sorry, that's a little editorializing there.
This also reminds me of the app I talked to you about, Lauren. Groopic. Yeah, this is one, um, that was created by Eyedeus Labs, which is out in Pakistan. But it's an app that lets everyone

(21:33):
be in a photograph without any one photographer having to step out, or having to give your, you know, very expensive device to a random stranger who may or may not make off with it. So let's say, let's say,
like, we get the whole HowStuffWorks crew to go someplace. You know, we're all going to Six Flags for a day, and we want to get our

(21:53):
picture taken in front of the Great American Scream Machine, as we are wont to do. And there's the whole group, but who takes the photo? Do we entrust our expensive HowStuffWorks camera to some ragamuffin walking by? Or Bugs Bunny? Yeah, he can't be trusted. He doesn't even have opposable thumbs. So yeah,

(22:15):
we end up saying, well, what can we do? What if we use Groopic? Then essentially, from what I understand, what it allows you to do is take at least two photos where you swap out photographers, and then you can combine the two so that you have both photographers in the full group photo. All right, the app helps
you frame a picture, and then, you know,

(22:36):
one person takes the first picture, a second person takes
the second one. You mark out who the two photographers were on screen, and, based on the fact that it's already framed it for you, and so they're more or less identical photos otherwise, it will swap out the little slivers. Well, and you know, it helps if you have each photographer

(22:57):
on the extreme ends of the frame.
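From that description, the core compositing step could be as simple as the sketch below: two near-identical frames, and the sliver containing each photographer is copied out of the shot they appear in. The coordinates are hypothetical, and the real app presumably does smarter alignment and blending:

```python
# Merge two group shots so both photographers end up in the final photo.
import cv2

shot_a = cv2.imread("shot_a.jpg")    # photographer B is behind the camera here
shot_b = cv2.imread("shot_b.jpg")    # photographer A is behind the camera here

merged = shot_a.copy()
x0, x1 = 40, 260                     # hypothetical sliver where B is standing
merged[:, x0:x1] = shot_b[:, x0:x1]  # paste B's sliver into the frame with A
cv2.imwrite("everyone.jpg", merged)
```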
Right, this would have been so useful to, like, despotic Soviet rulers,
you know, like Joseph Stalin. But you went straight to
despotic Soviet rulers. Yeah, you know. So, so you've eliminated
some political rival and you want to erase his image

(23:17):
from pictures of you. Now you can not just erase him, but also insert your new cronies and sycophants. Right.
I've said that what I want to do is use this kind of thing to take a picture where there's like twenty Jonathans all in the same photo. You can certainly do that. Your dream world, you know. But my dream, your nightmare. This all weirds

(23:41):
me out, honestly. I mean, the technology is fascinating and wonderful. And this kind of automatic revisionist history... I'm not sure whether or not I need access to this technology in an app. It actually does have some somewhat troubling implications. The
idea that you can manipulate images to such an extent

(24:04):
as to create a new history that never really existed.
It's kind of, you know... I mean, that's a plot point in a lot of movies and television as well.
It's just now we're getting to a point where the
average consumer could theoretically do that with very little training,
And we all do this all the time. I mean, everything that we're seeing... again, like, the human eye is flawed. It's only taking in so much information, and

(24:25):
it's filling in a lot of gaps in between those
frames that it's taking in. But, um, but yeah, just doing that on purpose, I'm like, okay. Well, I thought I'd talk a little bit about some other kind of
cool camera tricks. There was one in particular I wanted to
talk about, um, which was this idea of being able
to take photos and then change the focal point after

(24:47):
you've taken the photo. Yeah, the light field cameras. Yeah, light field cameras, also known as plenoptic cameras, although they're not true plenoptic cameras. A plenoptic camera, well, it comes from the word plenus, which actually means full or complete, and then optic, of course, is the behavior of light. A true plenoptic camera is impossible.

(25:08):
It's just a theory. It's kind of a thought experiment,
because the reason why it's impossible is that it's the
idea that you would be able to take, uh, all
the visual information within an environment, kind of like in
those virtual environments, and be able to reproduce a still
image from any angle, from any focal point. It's not
really possible, not only because we can't just place a

(25:30):
camera anywhere in the room, but also because the camera
itself is going to reflect light off of it, and so the camera's presence changes things. It's kind of like that whole idea that by observing something, you change the observed. Sort of similar, although Heisenberg's uncertainty principle would tell me that I know where the camera is, but I

(25:50):
don't know how quickly it's taking pictures. Um, that's just a little quantum joke. But anyway, it's not a true plenoptic camera. But a light field camera, what it does is
it tries to capture all the rays of light and
every direction that they are traveling within a single frame
of reference, a single image. So the camera that a

(26:11):
lot of people have heard about is the Lytro, which is this really cool camera. If you were to just look at one, you would think it looks like some sort of prism or something, because it's not, you know, camera-shaped. It looks like, you know, this elongated cubic kind
of thing, and it actually allows you to take photos

(26:33):
and then change the focal point after you've taken them.
So if you've set up, like, a scene. So, you know, Joe, let's say that you are
just crazy about war gaming, and you have an enormous collection of painted lead miniatures. Oh, that kind of war gaming. No, not that you're actually buddying

(26:54):
up to your, you know, Russian despotic friends that you've already referred to in this episode. But no, that you play tabletop war games, and you've got a huge collection of these painted lead miniatures, and you've set them up into neat row upon row, a battalion of soldiers, on this table, and you
take a photo from the end of the table. Now,

(27:15):
normally you would have to set a focal point on
your camera, right, you would have to say, all right,
I want to focus on the front soldiers, so that
everything in the background kind of fades away into fuzziness as it goes further back. Or you would set the
focal point so that the ones in the back are
in focus and the ones in front are kind of blurry. Well,
the Lytro camera captures the light field, all of those

(27:35):
light rays traveling in every direction, and then creates essentially
a virtual camera with a virtual lens within the software.
And so when you view your image, you can change
the focal point and say, all right, I want to
focus on the soldiers that are in the back, and
it'll switch the focus to the soldiers in the back,
or I want to focus on the ones in the front,
and it will create essentially a virtual camera with a

(27:57):
virtual lens and a virtual image sensor that would have created that particular image. And you can change it as many times as you want. And it lives that way,
but only if you're viewing it on a computer. Obviously,
if you were to ever print an image out, it
would be stuck in whatever focal point you had chosen.
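A minimal sketch of that software refocusing, assuming the light field has already been decoded into a 4-D grid of sub-aperture views, roughly what a microlens camera like the Lytro records. Shifting each view in proportion to its position in the lens and summing selects one focal plane; the array layout and the alpha parameter are illustrative:

```python
# Shift-and-add refocusing on a 4-D light field (U, V, H, W) of grayscale views.
import numpy as np

def refocus(light_field, alpha):
    """alpha = 0 keeps the captured focus; other values refocus nearer/farther."""
    U, V, H, W = light_field.shape
    out = np.zeros((H, W))
    for u in range(U):
        for v in range(V):
            # Each sub-aperture view is shifted in proportion to its offset
            # from the lens center, then all views are summed.
            dy = int(round(alpha * (u - U // 2)))
            dx = int(round(alpha * (v - V // 2)))
            out += np.roll(light_field[u, v], shift=(dy, dx), axis=(0, 1))
    return out / (U * V)
```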
That seems really cool, but I wonder how much space

(28:18):
does one of those image files take up? A lot. Yeah. Does it take a long time to process that? Not at all, it's like crazy fast. I mean, does it take a long time to take the original image? Not at all. I mean, because, you know, the kind of depth of field there that you're talking about... like, you know, what was so revolutionary about artists like Ansel

(28:39):
Adams was that they were working with such large prints of film that they could gain a depth of focus that was huge. Well, there is a limitation to the Lytro camera. It's not the kind of camera you're gonna take with you to go, um,
you know, like on a fast sightseeing tour. It's

(29:00):
great for things where you have composed a scene
and you want to take a photo of that scene,
or if you wanted to do something like, you're holding a flower in front of you and there's the Eiffel Tower in the background, you could take a picture like that, and then you could swap the focus so that the Eiffel Tower's in focus, or the flower's in focus. But

(29:21):
it's not the kind of thing you would just carry
around to take a snap whenever you were walking around.
It's not that kind of camera. It's not your... no, no, it's not designed for that. It's not meant for that. And you know, I've played with one of these. I actually got a chance to play with one, and it's kind of cool. The viewfinder on the back is essentially the entire... it's not even a viewfinder. It's a screen. It's a

(29:41):
touch screen that is the entire interface. So you'd point the camera at something, you'd see the video image of it on the little screen on the back. I keep saying viewfinder, but it's a screen. You'd tap it, it would take the photo, and then you could look at the image on the screen on the back, and even then you could touch different parts of the

(30:01):
image and bring that part into focus. And then once you uploaded the images, they would live on a website that Lytro owned, and you would be able to play with them and share them that way. You could share them onto other platforms like Facebook or Twitter or whatever.
People could view the images and and they could change

(30:22):
the focal point too. So it's a living image in
that sense. So if I were to upload one of
these images, Joe, you could go and look at it and say, oh, what would this look like if this part were in focus? And you click on it and then it would switch. So you could zoom in and see the shadowy man in the window? That part is not built in yet, but who knows what could happen in the future.

(30:43):
So that was one super cool kind of futuristic thing
that exists right now. And the Lytro camera came out a couple of years ago and has, you know, sort of been more or less a curiosity, I would say, among a certain, like, tech-savvy group who heard about it early and kind of really dug it.
I don't own one. I thought it was a neat product,

(31:04):
but I didn't actually purchase one. But I did enjoy
playing with it. But then there's other like kind of
futuristic ideas, like the idea of being able to use
a camera to take a picture of something that's not
even in the room, like it's in the room next door.
How does that work? Well, you could have an X-ray camera. That would work, but it would also be very dangerous. You're radiating yourself every time you take a picture. Yeah,

(31:28):
not to mention, not to mention your... well, you don't do it to yourself, you do it to other people. But if you're taking the photo, you're still being exposed to X-rays. Get my lead pants. We're doing photography. Lead pants at Six Flags, that's what I'm picking
up from this. But anyway, so this is a concept

(31:49):
that has been worked out over at MIT. Settle, children, settle. They called him Lead Pants. Okay, Joe, enough, enough. We're keeping all of this. It's all going in. Yes, we are. Executive decision. It's all kept. Noel, you answer to me. It's all kept. So,

(32:10):
MIT researchers were working on this idea
of being able to take images of stuff that wasn't
directly within the field of view of the camera. The
example used was that let's say you're shooting an image
in a room and there's an open door that goes
into another room. Now you do not have an angle
of view into that other room. You can just see

(32:31):
the open door that has opened into the other room.
You take an image with this camera, and then it
starts to collect data and reconstruct what might be in
that other room, giving you an image of let's say
that there's a person hiding in there. You would see
the picture of a figure in that other room, which
is a cool idea. How does this work? It's actually

(32:53):
using very, very short bursts of laser light. It projects laser light out, and some of that laser light hits the doorway that's open and bounces off of it, and then will eventually hit stuff that's in the room, bounce off that back to the door, bounce off the door
back to the camera. Now, the number of rays of

(33:16):
light, of this laser light, or the amount of information that's coming back, is a fraction of what was sent out. Right. You're only getting a tiny little echo back of what you had just sent out in a burst. And they're using femto lasers, which means it's sending out a burst of light that's a quadrillionth of a second long. And they actually have to

(33:37):
use a special kind of shutter that closes after they
shoot out this light because they don't want that initial
bounce back to affect the information of the stuff that's
in the room. Because, you know, if the laser light just hits the door and bounces right back to the camera, that's going to ruin the image. So what it does is, the shutter actually

(33:58):
stays closed for a fraction of a second, then opens up to accept all incoming photons. And then the way it reconstructs the room that is out of view is, it measures the amount of time it took each photon to come back to the camera. So it's almost like sonar, but with light.
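The sonar-like arithmetic is simple: at the speed of light, each returning photon's extra travel time maps directly to extra path length along camera, door, hidden object, and back. A minimal sketch, with an illustrative 20-picosecond figure showing why femtosecond-scale timing resolves millimeter-scale depth:

```python
# Time-of-flight: convert a photon's extra round-trip time into path length.
C = 299_792_458.0  # speed of light in m/s

def extra_path_m(extra_time_s):
    """Extra distance traveled, given extra time relative to a reference bounce."""
    return C * extra_time_s

print(f"{extra_path_m(20e-12) * 1000:.1f} mm")  # 20 ps later -> about 6.0 mm
```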

(34:20):
And it's a really cool idea. The only thing is that the reconstruction part, again, is probabilistic. It's the best guess, which means that
you could get information back because it's such a small
amount compared to what you sent out that there's a
lot of extrapolation that has to happen in order for
you to be able to take a look at what
was in that other room. Yeah, I'd wonder if your
image of what was in the room could be affected

(34:42):
by, I don't know, I mean, not just solid objects, but heat and atmospheric composition. I would imagine not. I mean, we're talking lasers, so that's a very direct kind of... it's measuring the time periods between the photons. Then, uh, yeah, I think... I don't... I mean, I

(35:03):
honestly don't know the answer to that. It may very
well be. Um, I think most of the time, like, at least in the example they gave of taking a picture of something that's happening in another room, you probably don't have to worry too much about that. Now, if there's a lot of electromagnetic interference in there, that could end up playing a part. But I don't know. Maybe if Electro is trying to play Xbox in

(35:25):
the room next door, that might be a bad thing.
Extra technology combined in with regular old photography, if you can call something like that regular old photography, is probably going to lead us to interesting places. Like, you know, all of our cell phones basically have accelerometers in them these days, and if you can combine that data with the data that happened when you took a motion-blurred photo, you

(35:47):
could hypothetically correct for it. So yeah, you're essentially saying, like, oh, well, the camera was moving right to left when this photo was taken. Here is what it would have looked like had it been still at the moment that the photo was snapped. Interesting.
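A minimal sketch of that correction, assuming the accelerometer has already told you the blur was a straight right-to-left streak, so the blur kernel is known. Wiener deconvolution in the frequency domain, which is where the Fourier transform mentioned later in the episode comes in, then divides the blur back out; the kernel length and noise constant are illustrative:

```python
# Wiener deconvolution: undo a *known* blur kernel in the Fourier domain.
import numpy as np

def wiener_deblur(image, kernel, noise=1e-3):
    """image: 2-D grayscale array; kernel: small 2-D blur kernel."""
    K = np.fft.fft2(kernel, s=image.shape)       # kernel spectrum, zero-padded
    I = np.fft.fft2(image)
    H = np.conj(K) / (np.abs(K) ** 2 + noise)    # regularized inverse filter
    return np.real(np.fft.ifft2(I * H))

# A 9-pixel horizontal streak: the kernel a right-to-left shake might produce.
kernel = np.ones((1, 9)) / 9.0
```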

(36:07):
Yeah, the cool thing about this technology is that while we are not in the realm of zoom and enhance the way we see it in movies and television, there's no question in my mind that
we are heading in that direction now. It may very
well be that the images we see when we zoom
and enhance are a lot of guesswork, very sophisticated guesswork.
But I think we're going to get there to a

(36:29):
point where we don't have to wait a certain number of hours to get an idea of what
this fuzzy photo might be of. Well, in a way,
I would say that I do think in one interpretation,
we are never going to get there because we're still
never going to have information that wasn't there. No, no, we

(36:51):
will always be limited to what's there and guesses about it. Right,
But our guesses are getting better, and our way of recording is getting better. So, like, previous cameras did not have this shoot-a-femto-laser-into-an-obscured-room capability, right? And as computers get
more powerful, we're um, you know a lot of the
equations that people are working with right now are things

(37:14):
that have been around since the nineteen hundreds, or the eighteen hundreds, I'm sorry. You know, the Fourier transform, which is a big one that is being passed around in most of the apps that you can download to reduce blur. Yeah, that was a dude who was born in seventeen sixty-eight. So

(37:34):
you know, this math has been around, but the way that we're using it right now is pretty incredible. Yeah. So, uh, we're not going to be doing any sort of Bones-like technological wonders. I hear that they actually started to
scale back on some of the more ridiculous technological things

(37:55):
they would do in that show. I haven't watched it in many seasons. Yeah. So which one is Bones? Bones was the one with David Boreanaz as the cop and Emily Deschanel as the anthropologist, and they solve crimes through science. Science in quotes. Science fiction.

(38:19):
Their pretty lady computer scientist would be like, oh hey, yeah, no, I just totally developed this new computer that does these crazy things. So yeah,
I'm excited to see where camera technology takes us in
the future. Maybe we get to a point where every
image you see you'll just have to keep in mind,
I cannot trust that this moment ever actually happened. We're

(38:39):
kind of already there, aren't we? Because, I don't know, do you believe the photos you see on the internet? Like, when my friends on Facebook post their wedding photos or a picture of their new baby, I have to comment: shopped. This has been photoshopped. Did you guys see this? This plays into our conversation a little bit. Did you
guys see the thing I posted? There was a guy

(39:01):
who faked being at Comic-Con. Yeah. Like, a friend of his went to Comic-Con, and he was not going to Comic-Con, but he decided, just for kicks, that he would pretend that he was also at Comic-Con. So he kept texting him, going like, oh, hey, I'm over here in this meeting room, or, are you here?
Oh you just missed me. Oh no, I'm over here now.

(39:22):
But he was... he had been before, so he knew enough about Comic-Con to be able to fake it and say, oh, I'm over at Hall H, or I'm across over in the Gaslamp District getting food, and just leading this poor
sucker along the entire time. One of the things he
did was, he scoured Twitter for images taken

(39:42):
and Instagram taken from Comic-Con, found one of these two guys, like, it was a couple of celebrities posing together. And so then he matched himself, uh, standing in an alleyway behind his house, and then photoshopped himself into the photo, replacing one of the two celebrities, so it looked like he was hanging with

(40:03):
one of these other guys, and then uploaded that and
sent it to his friend, saying, I just ran into so-and-so, here we are together. And it looked great. I mean, upon casual glance, it did not appear to be a photoshopped image. Now, if you were to look really closely, you'd say the lighting is really weird, because his face is not lit exactly the
same way. But you know, if you're just looking casually,

(40:25):
you wouldn't think anything of it. So Yeah, at this point,
I think you're right, Joe. I think we have to
just assume that everyone's photos of their wedding and babies and everything are shopped. Yeah, everything is. Well, guys, uh,
that kind of wraps up our discussion. What do you
guys think about the future of photography and videography and
cameras in general. Is it's something that's exciting to you.

(40:47):
Are you a photoshop wizard? Do you have lots of
examples of crazy photoshops? Do you want to do some
crazy photoshops of the hosts of Forward Thinking? It's gonna happen. I might as well ask; it happens. Well, guess what.
We have images of all the hosts of Forward Thinking.
You can find those over at our Facebook page. They're

(41:07):
they're up there. You can also, if you hunt around, find photos of us. I can't wait to see where we end up. By the way, if you want to get in touch with us, you can email us. Our address is fwthinking at discovery dot com, and go to fwthinking dot com for all the blogs, podcasts, videos,
lots of other interesting material there. We look forward to
hearing from you, and we'll talk to you again really soon.

(41:32):
For more on this topic and the future of technology, visit fwthinking dot com. Brought to you by Toyota.
Let's go places


Hosts And Creators

Jonathan Strickland

Joe McCormick

Lauren Vogelbaum
