Episode Transcript
Available transcripts are automatically generated. Complete accuracy is not guaranteed.
SPEAKER_04 (00:00):
I think for
especially in the Lightroom and
Camera Raw case, where so much of the design principles from the
very beginning has been about everything can be undone.
Everything is kind of this parametric, non-destructive,
uh, based editing environment.
It's really intended to encourage photographers to
(00:20):
explore and try things, right?
SPEAKER_02 (00:29):
This photography
podcast is brought to you by
Frames, quarterly printed photography magazine.
Here is today's host, W.
Scott Olson, with another fascinating conversation.
SPEAKER_05 (00:48):
Well, hello,
everyone, and welcome to another
podcast from Frames Magazine.
My name is Scott Olson, and today, folks, today, this is
gonna be so cool.
I've got to tell you, I am really, really excited and
curious and cannot wait to unpack what's gonna happen today
because we're talking with someone named Eric Chan.
And you may not think you've seen his name, but I can almost
(01:11):
guarantee you have seen his nameon a daily basis.
From one aspect, Eric is just aworld-class, fantastic
photographer.
I mean, we're talkinglandscapes, urban stuff, travel
stuff.
I'm looking at his website rightnow.
There's Japan, there's Canada,there's Nepal, there's Iceland,
Greenland, the man's been, Idon't think there's a country on
(01:33):
the planet that he hasn't beenin.
And some of the best landscapeand wildlife and travel
photography that I have seen,equally at home in black and
white and in color, the stuffjust pops.
It's just absolutely resonant.
I hate to tell you, I wasteasing him about this just a
minute ago before we hit record.
You know, Eric is also partiallyresponsible for the number of
(01:55):
cat photos out on the web.
Um, but I hate to admit they'reactually really good as well.
Now, beyond the fact that Ericis one of the photographers who
I find absolutely thrilling andcompelling with every single
composition, he's got a sidegig.
He's got this little thing goingover on the side where he is a
(02:17):
senior principal scientist for, hang on for it, Adobe.
This is, this is the guy who's working on Camera Raw.
And his projects include stuff that we use every single day:
highlights and shadows, clarity, my favorite slider of all time,
Dehaze.
I mean, this is the guy, or one of the guys, behind the work that
makes our work possible.
(02:38):
So we're talking about two things today.
We're talking about some absolutely breathtaking
photography and a coder's, a programmer's, a scientist's
insight into how in the world do we make that vision really pop
on our own screens?
Eric, welcome to the podcast,man.
How are you doing today?
I'm great, Scott.
(02:59):
Thank you so much for having me.
I'm really looking forward tothis.
Um let's let's start, you know,let's use the Wayback Machine
here for a minute.
How did photography, either fromthe technical side or the
artistic side, assuming they'reslightly different, um, first
come into your life?
You know, running around with an Instamatic when you were seven,
or how did this all start?
SPEAKER_04 (03:20):
Well, it started
with borrowing my dad's old film
camera when I was young and justdoing happy snaps, but it wasn't
until I was in graduate schoolin the early 2000s, I was doing
a just a recreational trip up toAcadia National Park in Maine in
October with uh three of mycollege classmates.
(03:42):
And um this would have been thefall of 2002, I think, or 2003.
And one of them was reallyexcited because he said, Eric, I
just got my hands on a CanonDigital Rebel.
It was like the first you know,digital SLR that was under
$1,000, you know, something thata student might actually be able
to afford.
And uh I was like at the time,great, but what's an SLR?
(04:05):
Because I had no idea.
And I just had never used asingle lens reflex camera
before.
I never used anything of thatcaliber with interchangeable
lenses.
And he let me borrow it for anafternoon and I got hooked.
And then I realized after I wentback to school, after that
weekend, that hey, my schoollibrary has actually a really
cool photography section.
(04:27):
And I started, you know, in myfree time, borrowing all the
books that I could and just kindof learning the basics, you
know.
And then I learned that mygraduate advisor, I was studying
computer science, but I was Iwas gonna ask if there was,
yeah.
I was studying computer graphicsin particular, because I thought
you know it was interesting.
I always had kind of a visualinterest in DOM.
(04:48):
And it turns out my graduateadvisor was really into
photography as well.
And so just kind of between mymy friends having you know
gotten me into it and learningthat my school had a really good
library, and even though all theall the books were about analog
photography, because these wereyou know fairly old books, it's
just they were really good forlearning just the basics in the
foundation, right?
(05:08):
And getting inspired by um some of these you know landscape
photos that I was really into,
David Muench, uh, Jack Dykinga, and uh Eliot Porter,
you know, kind of a lot of the classic masters of the form.
Um and then you know, thatfinally got my own camera and
then eventually uh made my wayin as a hobbyist and then uh
(05:31):
eventually through work toAdobe.
SPEAKER_05 (05:34):
Okay, so when you're
studying computer science,
though, what you know, beforeyou before you know Acadia, what
did you think you were going tobe doing with it?
SPEAKER_04 (05:42):
Well, I was studying
computer graphics and in
computer graphics, which is avery broad field, there's a lot
of ways that could go.
That could be about like making,you know, interactive, real-time
game engines for video games,for example.
That's what I thought I wasgonna be doing way back like
that.
Uh that didn't work out.
Um but I did, you know, I wasalso interested in like, for
(06:02):
example, maybe doing rendering pipelines for animation studios,
you know, or uh special effects, right?
For TV series and movies and things like that.
And um, for example, I was veryinterested just kind of the
interaction of light with ascene, how do you cast shadows
on objects in a realistic way?
Um, and just those are, I think,precursors to kind of my
(06:24):
interest in photography andunderstanding light and
composition.
At that time, I was looking atit more from the technical angle
in terms of how light interactswith a scene in terms of
reflections and light bouncingaround and going through objects
with translucency.
Um, but I wasn't really sureexactly to what use I was going
to put all that one day.
SPEAKER_05 (06:44):
Okay.
Now help me set the scene alittle bit, though.
Because when you were ingraduate school and you get your
first digital camera, you know,we're we're at Photoshop 1.0 or
what was the what was the stateof the electronic world for you
know computer graphics?
I assume Industrial Light & Magic was already up and running.
Um what was the milieu at thattime?
(07:04):
What was considered state of theart?
SPEAKER_04 (07:07):
Yeah, so uh, you
know, raw converters were kind
of new to the scene.
They'd been out, like AdobeCamera Raw had been out for a
couple of years.
Uh we didn't have Lightroom yet.
Uh we didn't have aperture atthe time, right?
This all kind of before all ofthat.
There were a handful of onesthat were out on the market, but
they were all in their infancy,right?
And people were still learningthe basics of what it means to
(07:29):
capture in RAW and process thoseif you were doing digitally.
Um Photoshop was not, you know,we were well beyond version 1.0,
but um, we didn't really have alot of the you know editing
tools that we have today.
And it was still very muchoptimized around the idea of
editing one picture at a time.
Like there weren't you know, ifyou're a busy wedding
(07:52):
photographer and you came backwith 3,000 pictures from a
weekend, I mean, good luck goingthrough all of them one by one
with a you know, editor ofchoice, right?
It just wasn't a very greatworkflow at the time.
Um, and you know, the tools thatwe did have, they were certainly
a lot less sophisticated thanwhat we have today in terms of
what one is able to do with aphoto.
(08:14):
Of course, we had basic thingslike exposure and cropping
adjustments and whatnot.
Right, right.
Um, but I think in general, justthe level of sophistication,
both from the editingperspective as well as the
organization and workflowperspective, these were really
early days.
SPEAKER_05 (08:30):
Oh man.
I remember some of those withwith you know great repression,
I think.
Yes.
Um so I mean one of the thingsI've discovered, you know, over
the years is that a lot in theanalog days, a lot of
photographers fell into theirlove of photography by the
shooting experience, being outin the field or the studio,
holding the camera, pressing theshutter release, that kind of
(08:52):
stuff.
An equal number of photographerswill say that their love for it,
their passion, was found in thedarkroom.
You know, the this magicalmoment where an image starts to
appear on paper.
And both situations really arekind of separate problem-solving
environments.
(09:12):
Um, you know, the problems yousolve in the field are not the
problems you have in thedarkroom, you know, et cetera.
So when you start puttingcomputer science together with
photography, was there one thatwas sort of pushing the other?
SPEAKER_04 (09:27):
Great question.
I really love both angles of itpersonally.
You know, I got into nature andlandscape photography just
through my love of beingoutdoors and wanting to try to
capture some semblance of theexperience of being there,
right?
Photograph is never really asubstitute for the real
experience, but I that was kindof what inspired me to take
(09:47):
those sorts of pictures.
Um, so that you know, it relatesto that aspect of it.
But then I think the computerscience angle of it to your
question, it really tied into mylove of tinkering with what can
you do with a picture once youhave it.
And once I understood what rawmeant, which is basically raw
(10:07):
ingredients, so to speak, likeif one thinks about a baking or
cooking analogy, you know, ifyou have a finished dish, you
can always tinker with it with abit of salt and pepper and
spices and whatnot.
But it's not as malleable orflexible as what you can do if
you had to start from, you know,earlier in the process, right?
Right.
And so once I've learned, atleast at the technical level,
what goes on inside of a rawfile, and realizing that, oh,
(10:30):
there's a lot we can do with thedetail with regards to
sharpening and texture and edgeand contrast enhancement and
bringing out little nuances ofan image, as well as kind of
broader things like how do weinterpret certain colors and
what do we do with you know saturated colors that really
pop and are true to life, but may not be representable on an
(10:51):
average consumer display.
You know, what do we do withthese colors, right?
Some of them are aestheticdecisions, some of them are more
like a technical decision,right?
We just can't represent them inthis color space or on that
display.
We've got to do something aboutit, right?
And so all of those kind of tieinto uh this, in my view, kind
of a classical left brain, rightbrain merging of aesthetic and
(11:14):
scientific decisions to be madein post-processing.
SPEAKER_05 (11:18):
You know, it it it
might be left brain and right
brain.
It it might be, you know, to usea tech technical term here, you
know, just you know, pushing thecool factor a little bit.
You know, I'm looking at some ofyour images here.
Um, and and you know, this isjust on the front page of your
website.
And let's just, you know, thetop right, there's one from
Iceland.
Just below that, you know,there's another one from
Greenland.
And, you know, ice and icebergsand all that.
(11:41):
Um really, really tough to dowell sometimes.
And I'm wondering if you didn'tsit there, you know, one night
looking at one of your images,going, This, yeah, there's
actually more here.
Or were you looking at the codethinking there's more here?
Um, you know, which problem istastier for you to solve?
SPEAKER_04 (12:00):
I think uh I know
that I now I think have enough
experience, especially with someof those iceberg images, to know
that there's always more data inthe raw file that can be
extracted.
So even if I look at the initialrendition of the file, it's
like, yeah, it's pretty flat.
There's it doesn't look tooinspiring.
(12:21):
I know there's more in there.
Um, especially with somethinglike ice, it is photographed
under overcast conditions, whichI actually think is the
strongest way to photograph itbecause yeah, yeah, me too.
You know, it really brings outthe subtleties in the color.
Um and in those situations, Ifeel that as long as the raw
(12:44):
data was properly captured andexposed, so it's not you know
like wildly underexposed orthings like that, um, there's
just a lot of nuance in theshading of the ice that can be
brought out with, you know,either global or local
adjustments.
Dehaze, as you mentioned earlier, is actually something that works
pretty effectively when used locally in different regions of
(13:06):
the ice.
And so I think of it a littlebit technically in the sense
that I know what the tools cando to it if the if it looks flat
to begin with.
Um, and part of it is theexperience of knowing I've done
this before.
I know what kind of details youknow I'm looking to bring out of
an image.
And so all I worry about in the field when I'm capturing is to make
(13:26):
sure uh it's properly exposed.
SPEAKER_05 (13:29):
You know, listening
to, I'm suddenly wondering about
the universe before and afteryour work.
Because, you know, if you've gota bad lens, you know, you're not
gonna, you know, you can you canyou know have some fun with it
in post-production, but it'sstill a bad lens.
Um and you know, people arespending countless hours
(13:49):
debating the merits of stackedsensors, partially stacked
sensors, you know, backside, youknow, all this kind of stuff
before the shutter release isever pushed.
Um and on the back end, youmentioned you know, consumer
monitors.
Um, there's a lot of people whoare still looking at pictures
just on their phone or on youknow old school monitors.
Do you find yourself hamstrung, inspired, working in tandem with
(14:13):
those other steps in theprocess?
You know, the people designingsensors and lenses and the
people designing monitors?
SPEAKER_04 (14:20):
Yeah, it's a unique
challenge because just the great
variety of devices, both on thecapture and the display side.
Um, I think the upside is thatas we've been working with a lot
of our hardware partners out there, over time we've gotten
better tie-ins into Lightroom and Camera Raw to improve certain
things.
For example, around 2010, one ofthe major advances that were
(14:45):
made around kind of the adventof mirrorless cameras, right?
Was around the idea that, well,we can make the package smaller
by taking some of the thingsthat we used to build into the
lens and try to do it insoftware.
Now, the cynical might listen tothat and say, well, you're just
trying to make cheaper glass sothat you know you don't have to
build, you know, um, put all thelens elements for fixing things
(15:08):
like distortion inside the lens.
But I do think that that's areasonable trade-off because
what I think what lens designersunderstood was that there's
certain things like sharpness,which are hard to fix in post,
but there are other things likelight fall-off in the corners,
which is relatively easy to fixin post.
And so they started to makedecisions like how can we make
this lens smaller, yeah, easier,you know, more compact, better
(15:31):
for travel, for instance, as oneof the options to offer, as
opposed to, well, every lens hasto have a big front element that
makes it bulky and you know hardto store.
And because we have thesetie-ins now in the Lightroom
where we can read metadata froma lens or the raw file that
says, here's how the lens wasdesigned to be.
We can fix certain thingsautomatically uh when you load
(15:53):
the image into Lightroom.
I use the term fix a little bitkind of casually here, but what
I really mean here is thatthere's certain operations that
they used to build all into thelens and they made all the
lenses much bigger and much moreexpensive.
And I think it's actually reallygood now that there are
different options forphotographers, right?
You can still buy the super topgrade lenses that cost a lot
(16:14):
more if you're really into that.
But people also now have the option to get the kind of more
accessible, compact, travel-friendly lenses that may
not be quite as good optically,but they know that the images
still look pretty good when youcome into Lightroom by default
because of the technical tie-inswith things like we know what
the residual distortion andaberrations are, and we can try
(16:36):
to do as much of that correctionfor the photographer
automatically.
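A minimal sketch of the kind of correction being described, assuming a simple polynomial fall-off model with made-up coefficients; the actual lens-profile math in Lightroom and Camera Raw is not published here, so treat this only as an illustration of "fix light fall-off in post":

```python
import numpy as np

def correct_vignette(img, k1=0.35, k2=0.10):
    """Brighten the corners using a simple radial gain model.

    img: float array (H, W, 3) with linear values in [0, 1].
    k1, k2: hypothetical fall-off coefficients of the sort a lens profile
    might supply; real profiles are far more detailed than this.
    """
    h, w = img.shape[:2]
    y, x = np.mgrid[0:h, 0:w]
    # Normalized squared distance of each pixel from the image center.
    r2 = ((x - w / 2) / (w / 2)) ** 2 + ((y - h / 2) / (h / 2)) ** 2
    # Inverse of the modeled fall-off: gain grows toward the corners.
    gain = 1.0 + k1 * r2 + k2 * r2 ** 2
    return np.clip(img * gain[..., None], 0.0, 1.0)
```

Distortion correction works in the same spirit, except the profile drives a geometric remapping of pixel positions rather than a brightness gain.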
SPEAKER_05 (16:40):
This is so cool.
Um I have to ask, so somebodycomes out with a new lens.
Do you run down to your localcamera store, buy it, bring it
back into the lab, put it inyour machine and analyze it?
Or is there a long-standingconversation between the
manufacturers and you guyssaying, you know, I know you're
going to build a profile, youknow, let's talk in advance.
SPEAKER_04 (17:01):
Yeah, it's all of
the above, Scott.
So in the past, well, it keepsthe job interesting, right?
So in the past, um, back beforewe had long-standing ongoing uh
partnerships with camera andlens vendors, back when we were
starting to experiment withbuilding lens profiles, you
know, we did tend to go eitherbuy or rent lenses to try to
(17:25):
understand what was possibletechnically.
And, you know, at the time, uh,we tended to have to profile the
lenses in our lab ourselvesbecause of the fact that the
lenses were historically for uh film SLR systems where there
wasn't any real electroniccommunication with the body
(17:46):
other than things like autofocusand focus position.
Uh now with a mirrorless system,what's really interesting, and
now of course they're soprevalent, is that um the the
lens mounts have much richermetadata information that goes
across them.
So typically what happens isthat you know a given lens,
let's say a new 24 to 70 thatcomes out, will have some
(18:09):
characteristics, but thatinformation is actually stored
digitally in the lens itself,um, in a little piece of
computer memory that's in thelens.
And when you capture a raw filewith it, that information is
sent across the lens mount to the camera and then it's stored in
the raw file, and then it's in away, uh, in a format that we on
the Adobe side can read andapply the processing in
(18:31):
Lightroom, corresponding to thatlens, not only to that lens, but
specific to your particular uhcapture condition.
So you use that 24 to 70 lens at 35 millimeters, f/4.
You know, we know that, right?
Because of all the EXIF data.
But all of this only works if wehave a good long-term
relationship with the lensvendor.
(18:52):
So what we've been doing for thepast 15 years is really building
and establishing thatrelationship.
So these days, the way it worksis a lot more like the latter of
what you suggested, which isthat you know, a lens comes to
market, it's announced, we oftenwill work with them to try to
get the profile built in advanceso that by the day you can go to
your favorite store and pick itup, uh, we should already have a
(19:15):
profile inside of the libraryfor you.
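As a rough illustration of the "profile in the library on day one" idea: the lens identification that a mirrorless body writes into the raw file's metadata becomes the key into a shipped profile library. The field names, profile contents, and lens name below are hypothetical, not Adobe's actual format:

```python
# Hypothetical profile "library", keyed by the lens name reported in metadata.
LENS_PROFILES = {
    "Example 24-70mm F2.8": {
        "distortion_k1": -0.02,  # illustrative coefficients only
        "vignette_k1": 0.30,
    },
}

def find_profile(raw_metadata):
    """Pick a correction profile from metadata read out of a raw file.

    raw_metadata: dict such as {"LensModel": ..., "FocalLength": ..., "FNumber": ...}
    (names modeled loosely on EXIF tags; a real reader would parse the raw file).
    """
    profile = LENS_PROFILES.get(raw_metadata.get("LensModel"))
    if profile is None:
        return None  # no profile shipped yet for this lens
    # A real profile would also interpolate by focal length and aperture,
    # since the needed correction varies with the capture settings.
    return profile

print(find_profile({"LensModel": "Example 24-70mm F2.8",
                    "FocalLength": 35, "FNumber": 4.0}))
```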
SPEAKER_05 (19:17):
I was I was gonna
say, because you know, the I I
want that profile there theafternoon I buy the lens.
SPEAKER_04 (19:24):
That is the ideal we
strive for.
I can't say uh we did we do itall the time, but we that is our
ideal.
Like we want zero-day waiting.
And I would say we're these daysare about as close to that as
we've ever been.
So I'm proud of our team forhaving done that.
But it did take a while to getthere.
SPEAKER_05 (19:42):
Okay.
From from a technicalstandpoint, I mean your credits
include you know, highlights and shadows, clarity, Dehaze,
profiles, you know, lenscorrections, all this kind of
stuff.
I, uh, that's magic. Talk to me.
I you know, I you could show mea thousand lines of code and I
would not understand one ofthem.
Unless it was the old, you know,computer science 99 if-then
(20:03):
command.
That one I got.
The rest of it's just beyond me.
How do you work on highlights?
And I don't mean from a you knowmaking your picture better.
From a coding standpoint, how doyou work on highlights?
What do you what are your goals?
SPEAKER_04 (20:18):
Yeah, so something
like highlights is a really
interesting one because it's oneof those controls that really
affects the overall balance oftones in a photo.
So you know, earlier versions ofsomething like highlights, like
initial drafts would have beensomething as the analogy I would
(20:40):
make is something like, well,it's a bit like an exposure or
curves control, except it onlyaffects the upper range of the
picture.
When we experimented with that,the upside is that that's fast
and relatively easy to implementfrom a coding perspective, but
from an imaging and visualperspective, it was unsatisfying
because it tended to make thetones look too flat.
(21:04):
And it basically looked as ifsomeone had kind of compressed
the tones in a way that you justcan't see them that clearly
anymore.
So highlights end up justlooking dark and muddy and gray.
I had a, one of my photographic heroes, Charles Cramer, uh, he
used to call that tonal constipation.
Basically, uh all pushed uptogether and like nobody likes
(21:25):
the way it looks, you know?
Um so as like so.
That was like when we asked what our kind of guiding principles
and what our group goals were.
One of them, kind of informally among the team, was if we're
gonna have you know highlight tone mapping and have the
highlights slider bring it down, we don't want tonal constipation.
That's like, that's something to be avoided.
(21:45):
And so that leads us to exploring very different
techniques, uh, techniques that we call, um, that I think in the
computer graphics and imaging literature would be broadly
classified as uh local adaptation, where you basically,
I think the general way to think about it is imagine that every
pixel in the image had a different curve applied to it.
(22:06):
So it's not like one curve that's applied to every pixel,
but it's like different parts of the image will get different
curves.
And so then the guiding principle for developing the
method behind highlights is how can we do that in a kind of
photographically coherent way?
And so the techniques change a lot based on kind of the visual
goals.
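A toy sketch of that "different curve per pixel" idea, assuming a Gaussian-blurred luminance map stands in for the local analysis; the shipping algorithm is far more sophisticated, so this only shows why a locally adaptive gain avoids the flat, gray look of a single global curve:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def recover_highlights(lum, amount=0.5, radius=25):
    """Locally adaptive highlight compression on a luminance image in [0, 1].

    Instead of one global curve, each pixel is darkened according to the
    average brightness of its neighborhood, so bright regions keep their
    local contrast rather than turning flat and muddy.
    """
    local_mean = gaussian_filter(lum, sigma=radius)
    # Gain < 1 only where the neighborhood is bright; dark areas are untouched.
    gain = 1.0 - amount * np.clip(local_mean - 0.5, 0.0, 1.0) / 0.5
    return np.clip(lum * gain, 0.0, 1.0)
```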
Uh, and the technology behind
(22:31):
highlights, shadows, and clarity came from one of my teammates in
Adobe Research at the time, who had developed a new method for
doing uh tone compression, which I felt was state of the art at
the time.
This would have been 2011 or so.
Yeah.
And uh so he was working on a paper that was eventually
published in ACM SIGGRAPH, a very prestigious computer
(22:53):
graphics journal.
And I was working on the implementation for Camera Raw and
Lightroom.
And then once you have the basic idea in place, I think it
shouldn't be underestimated that there's just a lot of time spent
tuning.
And by that I mean it's like parameter tuning.
It's like, you know, as you know, the slider goes from minus
100 to 100.
(23:14):
Sure, but what does like minus 50 do exactly?
You know, like how far does it go?
And how does it work on images that are high contrast versus
low contrast?
How does it work on portraits versus landscapes, urban scenes?
You know, there's just a lot of testing to do and tuning.
Um I would say, you know, it's like, what is it, was
it the Edison quote, which is like 1% inspiration, 99%
(23:37):
perspiration.
I feel like you know, it's 1% idea and 99% tuning.
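Much of that tuning comes down to choosing the mapping from a slider position to internal parameters. A hedged sketch of the kind of response curve that gets iterated on; the gamma and ceiling here are placeholders, and the actual mapping in Lightroom is not public:

```python
def slider_to_strength(slider, gamma=1.5, max_strength=0.8):
    """Map a UI slider in [-100, 100] to an internal effect strength.

    gamma and max_strength are exactly the sort of knobs that get tuned
    against many test images; the values here are invented placeholders.
    """
    t = abs(slider) / 100.0
    # A gentle nonlinearity keeps small slider moves subtle and reserves
    # the strongest effect for the ends of the range.
    strength = max_strength * (t ** gamma)
    return strength if slider >= 0 else -strength

print(slider_to_strength(-50))  # "what does minus 50 do exactly?" -- this curve decides.
```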
SPEAKER_05 (23:42):
Yeah.
Yeah.
Yeah, I I I have this visionbecause you know, debugging code
is is is one thing, but havingit actually look good.
Is there a mission control in abasement somewhere with a super
giant monitor?
You know, you you tweak the codea little bit, then you look up
at it, and yeah, I don't likethat so much.
I mean, I mean, how do you judgesuccess when you're working on
these projects?
(24:02):
Because the code can be clean.
SPEAKER_04 (24:03):
Oh, it's so hard,
Scott.
It's so hard.
Well, I think part of it, soreference monitors and reference
environments are useful as abaseline, but uh to the point
you brought up earlier aboutlike people have different
consumer displays and peoplelooking at things on their
phones.
The reality is, for better orworse, there is this very
heterogeneous environment whereour customers are on all these
(24:27):
different devices, right?
Old Windows laptops, shiny newphones with bright displays.
Some people are lucky enough to have these high-end
HDR displays and so on.
So I think our measure ofsuccess is we actually have to
evaluate across these devicesbecause that's how our
customers, our photographers, are going to be looking at them.
(24:48):
Um, not everyone has you know areference studio environment
with you know a recommendeddisplay to do something, right?
And the reality is a lot ofphotographers have multiple
environments, right?
They might, even those who dohave a reference studio
environment, they're probablyalso looking at photos on their
phone, right?
And those are very different.
So our measure of success isreally looking at a diversity of
(25:11):
things.
So we have for every featurekind of a target.
Like, here's like kind of ourtop um audience that we're going
after.
For example, of a feature liketexture, right?
Which came out a few years ago.
Love that, love that one.
It's used for a lot of differentthings, but it was originally
targeted specifically for skinsmoothing, like portrait
(25:32):
retouching for skin.
You can use it for other thingslike rocks and texture and
foliage and grass, but it wasreally targeted for that.
So we spent a lot of timeevaluating how it worked on
portraits, different skin tones,different lighting conditions.
And then we tuned it secondarilyfor other subjects as well.
But that's an example of what weidentified this was a need, this
(25:52):
was something that was heavilyrequested, and we tuned it
specifically for that.
So when we looked on differentdisplays in different
environments, we focusedprimarily on portraits.
SPEAKER_05 (26:02):
Going back to your
own photography for a moment,
you you've got a picture fromPortugal in 2013, uh, an image
that I just love.
It's a pathway in a forest.
There's a lot of fog, there'strees.
Um do you remember this picture?
You know which one I'm talkingabout?
Okay.
That, I mean, it's it's abeautiful, beautiful image.
It's also uh, you know, kind ofa challenge, you know, if you're
thinking of post-production, youknow, because you've got just
(26:25):
about everything possible goingon in that picture.
So walk me through two thingsfor that one image.
Walk me through the fieldexperience, taking the picture,
you know, and how you createdit.
But then, okay, you're backhome, you got a glass of wine by
the side of the machine.
It's like, now what the hell amI going to do to make this as
beautiful as it becomes?
SPEAKER_04 (26:45):
That's an
interesting trip, Scott, because
uh, first of all, it was doneduring kind of a spring break
trip where we didn't know whatto expect there.
It was my first trip there, andit ended up being really foggy
on most of the days, just ingeneral.
Um, it was also interesting froma capture perspective in the
field because this is my firsttime borrowing uh one of my
(27:09):
teammates' Sony RX1 cameras.
It's like a fixed 35 millimeter uh lens and uh you
know, no zoom, right?
Because it's just a fixed primelens.
It's my first time actuallytrying to use a prime lens for
kind of a landscape shoot.
And so it was actually my firstexperience training myself to
photograph for an entire weekonly at 35 millimeters.
(27:32):
So it's kind of an interestingexercise, but this was actually
the first image that I made inthat mindset that kind of
convinced me that, hey, youknow, maybe life at 35
millimeters only is okay.
You know, I can it started tomake me realize oh, this is why
maybe some people are reallykind of gravitating to this
focal length.
Just kind of the relationshipbetween the branches and the
foliage and the path and all ofthat.
(27:53):
So yeah, just trying to composeand get these elements in a
somewhat pleasing arrangementwas my main challenge for that
in the field.
Um, and then once I got back tothe desk, editing it was mostly
about trying to preserve thefeeling of contrast.
Uh, like not that much contrast,just enough local contrast so
(28:14):
you can see the distinctionbetween the different elements,
like the branches that arefading in the background and the
path that's receding, but likenot enough contrast that you
lose the feel of the fog.
I think that's one of thepitfalls of low contrast images
is that if you have uh a patternof stretching the histogram to
feel, kind of to fill the space,so you kind of set your black
(28:36):
point, your white point, ittends to destroy the feeling of
or the mood of the image.
Um, so for this one, I tried tobe more judicious with that.
You know, I would use a little bit of the brush or the
radial filter to add a little bit of contrast here and there, but I
tried to do it subtly.
SPEAKER_05 (28:53):
I I was gonna ask if
you were doing global changes or
you had a lot of masks in there.
SPEAKER_04 (28:57):
Yeah, I mean, I
would say the in this case,
global changes were really quiteminimal, maybe just a little
bit, a little bit of a crop.
But most of the changes weremade with local things just to
kind of rebalance the light alittle bit.
Um quoting Charles Cramer again, he often used to talk about uh
re-orchestrating the light.
(29:18):
You know, he he was a musician,so he liked to talk about things
that from the point of view ofsomeone who's like trying to
influence the balance of soundin orchestra, right?
Oh, this side is too loud, thatside's too quiet.
How do you kind of rebalancethings to be more harmonious?
And I think for an image likethis, where you have kind of
light coming through the fog,uh, it does tend to naturally be
(29:39):
too strong on one side of it.
So a lot of it was about justtrying to tamp down the edges a
little bit, bring out some partsof the branches so that you can
see them a bit better.
Um it's really kind of not a lotof major tweaks, but more like
little small tweaks here andthere.
SPEAKER_05 (29:55):
But so many of those
small tweaks can ruin the image
as well, make, you know, make menot believe it, make me not, you
know, say, you know, this issomething I want to fall into.
Um absolutely wonderful shot.
Um, you were making me think asecond ago, you know, about you
know the stuff that's coming offof sensors.
Is there a lot of informationthat a sensor captures that we
never see?
(30:15):
And you know, it's not a part ofuh even our imagination of what
the cameras can do?
I I know at least you know acouple of manufacturers, the
sensor is capturing infrared,um, which, you know, unless
you've got a filter, we don'tsee.
Um, but is there a lot of justdata that's just that that's you
know not part of our vocabularyyet?
SPEAKER_04 (30:35):
Yeah, there's a lot
of data.
I would say there's data that iscaptured, but we never show
directly to the photographer inLightroom.
So, I mean, some of this is justkind of uh kind of details how
the profiles work.
But for example, if you photograph with a Fujifilm X100
series camera, you know, what the sensor is really organized
(30:57):
as is this repeating six by six color filter array, um, which is
what Fujifilm calls X-Trans.
Um that pattern of recording pixels is not something we
directly show to the user, like we never show that six by
six version of the photo to the user.
It always goes through additional processing before we
show something to the photographer.
(31:19):
And part of that, I think, isjust from a workflow
perspective, there's not a lotof advantages to showing like
the earlier stage image that hasall this, you know, it would be
it would be a very strangething, I think, for most people
to see visually.
It's like the six by sixcheckerboard, that's a black and
white, and some of thecheckerboard elements are
bright, and some of them aredark.
It's like, what is this?
It's not even my photograph.
(31:40):
But from an image processingperspective, that is absolutely
essential because that's how weend up, you know, inter uh
interpolating and getting thedetails and colors right for
such images.
We kind of have to know thosedetails on the engineering side,
but that's like something that'shappened almost under the hood
and automatically, right?
(32:00):
It's not something thephotographer necessarily needs
to be aware of in order to usethe camera or edit the pictures
successfully.
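To make that "checkerboard the photographer never sees" concrete, here is an illustrative sketch of what a color-filter-array mosaic looks like on the engineering side before demosaicing. The repeating 6x6 pattern below is a made-up stand-in, not Fujifilm's actual X-Trans layout:

```python
import numpy as np

# Illustrative 6x6 repeating filter pattern (0=R, 1=G, 2=B).
# This is NOT the real X-Trans arrangement, just a placeholder.
PATTERN = np.array([
    [1, 0, 1, 1, 2, 1],
    [2, 1, 0, 0, 1, 2],
    [1, 2, 1, 1, 0, 1],
    [1, 0, 1, 1, 2, 1],
    [0, 1, 2, 2, 1, 0],
    [1, 2, 1, 1, 0, 1],
])

def split_mosaic(raw):
    """Separate a single-channel sensor mosaic into sparse R, G, B planes.

    raw: 2D array of sensor values. Each output plane is mostly zeros,
    which is why interpolation (demosaicing) has to run before anyone
    sees a normal-looking photograph.
    """
    h, w = raw.shape
    cfa = np.tile(PATTERN, (h // 6 + 1, w // 6 + 1))[:h, :w]
    return [np.where(cfa == c, raw, 0.0) for c in range(3)]  # [R, G, B] planes
```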
SPEAKER_01 (32:09):
Let's take just a
quick break.
We hope very much that you areenjoying today's episode.
The very fact that you arelistening to this podcast
suggests that photography meansa lot to you.
And if that's the case, youmight want to have a look at
Frames, quarterly printedphotography magazine.
We truly believe that excellentphotography belongs on paper.
(32:30):
Visit readframes.com to find outmore about our publication and
use the coupon code POMDCAST toreceive a recurring 10% discount
on your new Frames magazinesubscription.
And now, back to today'sconversation.
SPEAKER_05 (32:52):
I've also wondered,
you know, about um having to do
with the um denoise um featuresthat you know and how good they
have gotten recently.
Is there a kind of unintendedeconomic effect upstream?
Because if I've got a 20megapixel camera, but I've got
the denoise slider, why would Ibuy a 100-megapixel camera?
SPEAKER_03 (33:15):
Well, I mean, people
buy 100 megapixel devices for
all sorts of reasons, don'tthey?
SPEAKER_05 (33:20):
No, but but in terms
of image quality, you know, it
it it is that feature now um isso cool.
SPEAKER_04 (33:29):
I think it really
opened up a lot of
possibilities, right?
So um I think from an imaging perspective, uh it's really
enabled, I think, for photographers, in terms of what
cameras they use and what ISO ranges they use, it's really
given them more flexibility.
I think for high-resolutionsensors, which do tend to be
(33:52):
noisier per pixel, the upside isthat they have the option if
they're using them at evenmoderate ISOs.
Like if you have one of these 60to 100 megapixel sensor images,
uh they do tend to have noise when you zoom in, even at you
know 800 to 1600 ISO.
And even used at those moderatelevels, I think denoise is much
(34:13):
more likely to be able to cleanthem up in a way that preserves
the actual detail of it.
As to whether you know thelenses themselves hold up to 100
megapixels, I think it's maybe aseparate question, right?
You know, like obviously, if oneis not using a really, really
good lens with really, reallygood technique, it's not one's
probably not getting a full 100megapixels worth of data in the
image, right?
(34:34):
And then you just end up withyou know soft noise that has to
be cleaned up.
It's not really any better thanusing a good 20 megapixel
sensor, right?
Um I personally think the sweetspot still for full frame
sensors is around the 24, youknow, to 36, somewhere in that
range, you know.
Like uh you can get cleanerresults with 20 and below, and
(34:55):
um, but generally for uh I thinka lot of photography, you know,
that middle spot still remains areally good place to be.
And denoise really works well inthat range.
SPEAKER_05 (35:05):
Do you ever do kind
of you know kind of um I want to
say market studies, but that'snot what I mean.
You know, real use studies andand find oddities out there,
like somebody who you knowdenoises you know the hell out
of something and then puts in50% grain.
You know, that's really roughand big.
Um I mean, do you find people'shabits are are sometimes in need
(35:27):
of tweaking as well?
SPEAKER_04 (35:29):
I think yes,
absolutely.
So, some of the, well, I don't mean in terms of needing
tweaking, but they use the tools in ways that we had not
anticipated, right?
Or they use them in different orders.
So, you know, I think even veryhigh-level basic interactions,
right?
For example, do people tend touse the you know, exposure and
contrast controls first, or dothey do cropping first, you
(35:52):
know, and go back and forthbetween them?
There's an interesting interaction between all of these
things because some of the controls have behaviors that
depend on what the cropped result of the image is, right?
Um so for example, vignettes pay attention to where your crop
rectangle is, how the remove tool works depends on whether
the image is cropped or not, and things like that.
(36:12):
And so there are actual potential choices that affect um
how a user might choose to adjust controls based on the order
in which they do things, right?
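A small sketch of why the crop matters for something like a post-crop vignette: the falloff is computed relative to the crop rectangle, so changing the crop changes the result. The function is purely illustrative, not Lightroom's implementation:

```python
import numpy as np

def post_crop_vignette(img, crop, amount=0.4):
    """Darken toward the edges of the *cropped* area, not the full frame.

    img:  float array (H, W, 3) in [0, 1].
    crop: (top, left, height, width) rectangle in pixels.
    Re-running this after a new crop gives a different falloff, which is
    why the tool has to track the crop rather than the original frame.
    """
    top, left, ch, cw = crop
    y, x = np.mgrid[0:ch, 0:cw]
    r2 = ((x - cw / 2) / (cw / 2)) ** 2 + ((y - ch / 2) / (ch / 2)) ** 2
    gain = 1.0 - amount * np.clip(r2, 0.0, 1.0)
    out = img.copy()
    out[top:top + ch, left:left + cw] *= gain[..., None]
    return out
```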
And so that's that's one of theinteresting interactions.
So even though we can encourage an order, maybe like by the
order in which we place the controls in the panel on the right.
SPEAKER_05 (36:32):
I was I was gonna
ask, is the order there on the
right?
Is that your preferred order of processing?
SPEAKER_04 (36:39):
For example, in the
light and color panels, or in the
basic panel in Lightroom Classic, they are put
generally in the top-down recommended order.
But the reality is people bounceback and forth between them and
sometimes will jump to one atthe bottom before they go back
to the top, right?
And those habits are verydifficult to change, especially
if you found something thatworks for you.
(36:59):
So we really try really hard notto prescribe any particular
order of the controls.
But there are some, especiallynow with a lot of the more
recent controls like denoise orones that produce bitmaps of
images, that um they haveinteresting and challenging
interactions between them,right?
Like if you were to denoise animage, but you've also done some
(37:22):
removal, clone, and heal type of operations, well, you want to
make sure that those spots arealso properly denoised
afterwards, right?
And so it's just kind of achallenging interaction between
these um sophisticated controlsthat could be uh that produce
bitmap images, just because youknow, if you change one, you
kind of have to change theother.
SPEAKER_05 (37:44):
You know, I I'm
chuckling because I was just out
this morning and we had areally, really foggy morning and
and took some shots.
And so you, you gotta know the first thing I went for was
Dehaze.
Um, you know, when I called themup.
You know, well, well down on thetotem pole there.
SPEAKER_04 (38:00):
Yeah, De Haze is uh,
you know, it's we have it in
actually a couple of differentplaces, is an interesting story
there, right?
In Lightroom Classic, it's thereas part of the basic panel.
Um, in the rest of the Lightroomapps, including Camera Raw, it's
uh it's in the effects panel.
Uh and it's one of theseinteresting conversations where
(38:20):
uh internally we debate whether or not, you know, Dehaze is kind
of a top-level tonal control, like exposure and contrast?
That's an argument for having itin basic.
Or is it more of somethingthat's more for, I don't know,
creative effect and then itbelongs in effects, like
vignettes and things like that.
And you know, we had people onboth sides of that fence, and uh
(38:41):
as you can tell, we didn't allagree because we have different
products that put it indifferent places, but
ultimately, regardless of whereyou put it, you know, people
know what they're looking for, Ithink, when they try it out,
which is they're looking forthis interesting mix of, you
know, the the scene comes inmaybe a little bit obscured for
whatever reason, right?
(39:02):
Sometimes it's because there'sjust a little bit of atmosphere,
sometimes because it really isfoggy and it's very obvious that
way.
Sometimes they're photographingthrough an object like some pane
of glass or something like that,where there's just something
that's reducing the visibilityof it.
And Dehaze runs a little analysis of the image to figure out kind
(39:23):
of a mask of what is obscuring the image, and then tries to
subtract it out.
And uh so in a way, like visually, I think of it as a mix
of like the black slider with saturation and contrast.
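That description maps loosely onto the classic dark-channel style of haze removal from the imaging literature. A hedged sketch of that general approach, not Adobe's actual Dehaze:

```python
import numpy as np
from scipy.ndimage import minimum_filter

def simple_dehaze(img, strength=0.7, window=15):
    """Estimate a haze veil and subtract it, dark-channel-prior style.

    img: float array (H, W, 3) in [0, 1]. The minimum over channels and a
    local window approximates "how much flat gray veil covers this pixel";
    removing it deepens blacks and restores saturation and contrast,
    much like the mix of controls described above.
    """
    dark = minimum_filter(img.min(axis=2), size=window)  # per-pixel veil estimate
    veil = strength * dark[..., None]
    # Subtract the veil and renormalize so un-hazed pixels keep their value.
    out = (img - veil) / np.clip(1.0 - veil, 1e-3, None)
    return np.clip(out, 0.0, 1.0)
```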
SPEAKER_05 (39:38):
When you're working
on a on a project or an idea, uh
are you are you working sort ofcross programs?
I mean, you d do you know thatthis is gonna be for Photoshop
or Camera RAW or Lightroom?
Um, or is or is theimplementation in which package
a kind of separate decision?
SPEAKER_04 (39:55):
That's uh that's a
very feature by feature based
decision, I think.
Um technology-based things.
So certain things, for example,like technologies to remove an
object from an image or replaceit with something else.
Those are things that tend to becross-product or even cross
cross-um business units atAdobe, because there's a general
(40:19):
need, for example, to like to dothat kind of editing for
different types of things,whether it's photography or
graphic illustration designproducts, they all kind of
benefit from that type oftechnology in some form or
another.
But the way it manifests insideof a product tends to be very
product focused.
So as an example for like thegenerative remove feature for
(40:40):
Lightroom, there are versions ofthis in other products like
Photoshop, but the version inPhotoshop tends to be associated
with a text prompt, right?
The user can say, well, I haveto write a mask.
I want to fill it with bubblesor something.
And for photography, especiallyLightroom and Camera Raw, we
made a very conscious choice notto do that.
(41:01):
We think, well, no, your photoitself should be the source of
kind of truth for what you knowyou're editing.
And so we don't want people liketyping in, you know, insert, you
know, some rabbit with fancyears on this photo.
Um, you know, it's like so we sowe kind of very deliberately did
not put a text prompt in Generative Remove.
So even though the technology isshared, the way it manifests in
(41:25):
a product is very case-by-case.
SPEAKER_05 (41:27):
I I'm chuckling
because I'm looking at a picture
of a rabbit without the ears, and there's no ears in
this picture.
You've got it cropped rightaround his face.
SPEAKER_04 (41:35):
None of us done, you
know.
Uh no rabbits were harmed in the making of that photo.
SPEAKER_05 (41:41):
It's a it's a great
picture.
Um, you know, and and and andI'm really you know fascinated
by this because I, you know, 99%of my work is in Lightroom.
And and I use Photoshop for twothings text boxes and the canvas
sizes.
Um and and you know, if thosetwo were moved over into
Lightroom, I would probablynever use Photoshop at all.
(42:01):
For me, I mean but you know,they I'm sure you know um I
might develop another workflowbased on a potential that's over
there.
But you you know, you're you'reyou're dead on.
I mean, Lightroom really is, youknow, photocentric.
Photoshop is photopotential-centric.
How about that?
Um, you know, what what nextstep can I do with it?
What what advice have you gotfor people?
(42:23):
You know, both beginners andpros, you know, who are sitting
down.
I mean, a pro may sit down atany of these programs and say, I
know what's here, and you'regonna shake your head and go,
no, you don't.
Um is there advice across theboard that you would give to
users of your products?
SPEAKER_04 (42:39):
I think for
especially in the Lightroom and
Camera Raw case, where so much of the design principles from
the very beginning has been about everything can be undone,
you know, like the whole non-destructive thing.
Exactly, right?
Everything is kind of this parametric, non-destructive,
uh, based editing environment.
(43:00):
It's really intended toencourage photographers to
explore and try things, right?
And um, so even if you are aseasoned professional or very
experienced photographer, Ithink because the tools are
changing and improving all thetime, it's a lot of fun to go
back and revisit old photos andsee, like, well, not only if the
(43:20):
tools change, you might be ableto do more with them, but also
just if time has passed, um,your vision for that photo may
have changed, right?
SPEAKER_05 (43:28):
And so Eric, do you
have any idea how many hours
I've spent looking at picturesfrom five, 10 years ago
thinking, oh, now I can do this?
SPEAKER_04 (43:37):
I know, or I
remember how hard it was, how
much a pain in the rear it wasto do that.
And now this tool just like itfeels unfair, right?
Yeah, I think I mean, I thinkthat's actually because there is
a lot of um the temptation,like, well, you just take new
photos, take new photos, andit's always looking forward to
(43:58):
the next picture.
But I think it's also worthlooking back at existing photos.
And part of it is uh it's funbecause you know, you like those
photos and it's fun to spendmore time with them, but also
from a practice point of view,in terms of improving one's
skills with the tools andfiguring out is there a better
way to do something.
It's also fun to kind of uh justum like there's nothing wrong
(44:23):
with holding on to both versionsof a photo, right?
It's a little bit like me going and looking at the number of times
Ansel Adams printed his, you know, Monolith, the Face of Half Dome, you know,
picture, right?
It's like there's just so manydifferent versions of that from
different years on differentpapers and different visions for
how he wanted that to beprinted, right?
Uh or any of those.
(44:43):
I think it's just uh you can dothat digitally as well.
And I think that's one of themost fun things one can do
besides taking new pictures.
SPEAKER_05 (44:52):
Yeah, I'm looking at
your website, and and because of
my own you know personal lovefor black and white, that that's
the part I'm looking at rightnow.
And I'm wondering, you know, inLightroom at least, you know,
when I click on the little blackand white thing up there, it's
still an RGB file.
You know, I've still got allthose color tones just hiding
back there.
From the from the you know, thecoding side, from the technical
side, is dealing with black andwhite any different than than
(45:15):
dealing with color, or or is ita completely different ballgame?
SPEAKER_04 (45:19):
Uh yeah, I think it
is a little bit different, in
that a lot of the, you know, the cues that we have for like
bringing out contrast are really flattened into one dimension,
right?
So it really puts a lot ofburden onto controls like
clarity, highlights, shadows todo the heavy lifting of things.
(45:40):
I think it also really increasesthe importance of the masking
controls.
Again, because globally, if you rely only on global
controls with a color image, it's like, well, if you're not
seeing enough differentiation in tone, maybe you can use one of
the, you know, HSL mixer color controls to bring things out.
Right.
And in black and white, there'sjust a lot less freedom to do
(46:00):
that because it's black andwhite.
So I think being able to have arich masking experience that
works with black and white well,I think is really important.
And uh one of the things I was really happy with, I think it
was a couple years ago, we finally introduced uh curves as
a masking adjustment, right?
So I think that really helped black and white in particular.
(46:22):
Like, you know, if you've got like one cloud in the top right
of the picture that's just not standing out enough from the sky
or the mountain that's above, or something like that, you can
just go and tweak that one thing.
And it's a lot easier to do that with a curve uh than with other
tools.
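A rough sketch of what "a curve inside a mask" amounts to for that cloud example: build a mask selecting one region, then apply a tone curve only there. Everything here is illustrative rather than Lightroom's internals:

```python
import numpy as np

def curve_in_mask(lum, mask, gamma=0.8):
    """Apply a tone curve (here a simple gamma) only where the mask selects.

    lum:  2D luminance array in [0, 1] (e.g., a black-and-white photo).
    mask: 2D array in [0, 1], such as a brushed or luminance-range mask
          covering just that one cloud in the top right.
    """
    curved = lum ** gamma                      # the "curve": lift the midtones
    return mask * curved + (1.0 - mask) * lum  # blend: untouched outside the mask
```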
SPEAKER_05 (46:37):
How tough were the
new landscape features from you
know from a technicalstandpoint, from a coder's point
of view that just came out?
SPEAKER_04 (46:43):
Yeah, I think you
know, a lot of these, you know,
scene-based things arechallenging in a couple of ways.
One of which is that there'sjust a great variety of content
out there in terms of combinations of elements that
we're trying to tease out, you know, foliage, water, mountains,
and so forth.
And um, at the same time, having a dedicated model for each one
(47:07):
of these elements is not really tractable yet from a performance
standpoint.
Because if you imagine running like seven different models to
find seven different landscape scene elements, it would just
take way too long to run.
I think the main challenge for us for that was to try to
balance this.
How can we keep the running time accessible for you know most
(47:30):
devices that we know that our customers have, but also produce
a sufficiently high level of detection success, right?
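One way to read that performance point: rather than running a separate detector per element, a single model can emit one mask per class in a single pass. A hedged sketch of unpacking such an output, with made-up class names and a stand-in for whatever model produced the scores:

```python
import numpy as np

LANDSCAPE_CLASSES = ["sky", "water", "mountains", "vegetation",
                     "architecture", "ground", "people"]  # illustrative list only

def unpack_masks(model_output):
    """Turn one multi-channel model output into per-element masks.

    model_output: array (num_classes, H, W) of scores from a single forward
    pass of some hypothetical segmentation model -- the point is that seven
    masks cost one inference, not seven.
    """
    # Softmax over the class axis (shifted by the max for numerical stability).
    shifted = model_output - model_output.max(axis=0, keepdims=True)
    probs = np.exp(shifted) / np.exp(shifted).sum(axis=0, keepdims=True)
    return {name: probs[i] for i, name in enumerate(LANDSCAPE_CLASSES)}
```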
I think one of the um measuresof success for such a feature
from the technical side is notlike is the mask edge perfect,
but whether when you do anadjustment to that mask, it
(47:52):
actually looks like a goodphotograph.
Like in other words, does itlook photographically plausible?
You were talking earlier abouthow you know doing a lot of
masking-based adjustments canactually be a pitfall, right?
It can make its image look veryobviously fake or it doesn't
look like a cohesive photographanymore.
To me, that's actually the mainchallenge with a lot of these
landscape, subject, sky, people-based masks, like they do
(48:15):
tend to break down a little bitif you push the adjustments too
far because it doesn't look likeit was all taken as one
photograph.
It starts to look like acomposite.
Right.
And so we that's that's a linethat we're always trying to
balance.
SPEAKER_05 (48:28):
Is there a holy
grail out there for you guys
these days?
Is is is there uh an effectthat's that's just beyond being
released yet?
SPEAKER_04 (48:37):
You mean like a
visual editing control for um
that we're in terms of like whatwe're trying to build?
SPEAKER_05 (48:44):
Yeah.
What is a visual editingcontrol?
SPEAKER_04 (48:48):
Yeah, so I think
just um, you know, Lightroom is
quite mature at this point.
We have a lot of thefundamentals there.
There's always a lot of thingsthat can be improved and
iterated on.
Um I think uh a big area offocus for now, looking ahead, is
(49:10):
um, you know, you were sayingearlier about how photographers,
some people really gravitate tothe capture process and others
really gravitate towards theediting process after the fact.
And looking at the second part,I think there's a lot more we
can do in terms of having thetools assist better how to edit
(49:31):
photos.
Um, I'm thinking even for youknow busy professionals who've
have a lot of experience editingphotos.
Presets today, for example,they're very helpful for setting
controls to a fixed set ofvalues, but they're not
particularly adaptive to theunderlying image content, right?
Because they're just copying and pasting fixed slider values.
(49:51):
You know, I think we're very interested in kind of expanding
that capability to work, be more flexible, um, to be able to kind
of, you know, you can make something that um, you know, is
for your particular style, the way you like to edit them and
then apply it in the future in a way to a group of related photos
and have it be um similar to how you edited the first one.
(50:12):
Um I think there's a lot of kind of uh assisted ways to improve
workflows and to improve editing, which is not really
about um like a new tool that creates a different kind of
picture you couldn't have done before, but it's really more
about improving the workflow and making it easier.
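A toy sketch of the difference between a fixed preset and an adaptive one: instead of pasting a constant exposure value, the adjustment is scaled from statistics of the photo it is applied to. The parameter names and the scaling rule are invented for illustration, not a description of any shipping feature:

```python
import numpy as np

def fixed_preset():
    """A conventional preset: the same slider values for every photo."""
    return {"exposure": +0.5, "contrast": +10}

def adaptive_preset(lum, target_mean=0.45):
    """An 'adaptive' variant: exposure depends on the photo's own brightness.

    lum: 2D luminance array in [0, 1]. Photos already near the target get a
    small nudge; darker ones get more lift, so a batch of related photos
    lands closer to one consistent look.
    """
    mean = float(lum.mean())
    exposure = np.clip(np.log2(target_mean / max(mean, 1e-3)), -2.0, 2.0)
    return {"exposure": round(float(exposure), 2), "contrast": +10}
```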
SPEAKER_05 (50:29):
Okay.
SPEAKER_04 (50:30):
Um part of that I
think is because we hear a lot of people, they like to
capture in the field, but they want to reduce the amount of time
they spend editing, right?
SPEAKER_05 (50:40):
Yeah, it's yeah,
that has always fascinated me
because I I am not a fan ofpresets.
I'm not a fan of um, you know,even the recipes, the profiles
uh that are in there.
Um because you know I I love theability, and you know, I love
the experience of sitting hereat my computer uh and playing
with things.
Um but I I workflow, you know,speed.
(51:03):
I to make it faster to me isjust wrong.
I don't want to be too muchslower.
Um but I I want to consider theway I would you know in an old
school, dodge and burn carefullyand spend hours on a print.
I don't want to spend 35 secondson a on a on a file.
I want to spend consideration,maybe not time, but
(51:23):
consideration.
SPEAKER_04 (51:25):
Yeah, that's a very
I I relate to that a lot, Scott.
And I think it's I I've found inrecent years that I myself as a
photographer experienced bothsides of that because there are
some of my images, especiallythe ones that you've been um
seeing on my website, those arein the former camp where, yeah,
(51:49):
those are all done by hand.
I didn't use presets on those, Iconsidered those, right?
Those were you know tinkeredwith and I enjoyed the process
of doing that.
I don't, I'm not looking tospeed that process up.
Uh, but then I've also been onthe other side of it with images
that I don't consider belong inthat category.
They're maybe more like happysnaps in my life, you know.
(52:11):
Like uh, you know, I like tomaybe collect pictures of funny,
you know, signs that I've seenin cities that I travel through
and just have them be part ofmy, hey, look at this funny
thing that I saw.
Um, those are not ones that I'mgonna spend a lot of time doing
masking and things like that on,right?
But I do want them to have maybesome consistent visual style of
some sort.
And I can imagine havingsomething that goes through
(52:33):
those more quickly, you know.
And so like I think depending onthe use case, right?
Uh, there's kind of something init for everyone.
SPEAKER_05 (52:42):
Are we gonna get to
the point, you know, with my
phone, I can be walking alongand you think, you know, I
wonder what kind of tree thatis.
Whip it out, there's the littleGoogle search, she'll say, Oh,
that's a sycamore.
Um, you know, are we gonna getto the point where in Lightroom
I can, you know, almost like inthat scene in Blade Runner where
he's doing the voice commands todo the pictures, you know, say,
you know, computer, reduce allthe sycamores 15% of contrast.
SPEAKER_04 (53:05):
Yeah, you can like a
voiced or prompt-based way of
doing editing.
Yeah, Minority Report style, with uh yeah, you know, I think uh
certainly a lot of the ongoingkind of chatbot-based, you know,
large language model editing umexperiences are are looking that
(53:26):
way.
I think that the jury's still out on kind of how that experience
will ever show up, you know, in Lightroom in terms of how that
would look.
But I what's that?
SPEAKER_05 (53:35):
Will I be able to
will I be able to tell it to
change the sycamores some degreeand the the evergreens some
other degree?
SPEAKER_04 (53:42):
Yeah, I don't know
if it would ever be like text or
voice prompt based like that,but I do think if you've seen
like, you know, how the landscape features work, right?
It could be very uh much morespecific than just trees, right?
It could be a lot more specific.
SPEAKER_05 (53:56):
Okay, yeah, because
you got vegetation now.
SPEAKER_04 (53:58):
Yeah, vegetation,
right?
It's just not as specific to thespecies or things like that.
Right.
You can totally imagine you knowgetting a lot more specific and
having ways of editing this asyou know, take the top right
part of the sky and make it alittle bit more like the top
left part of the sky.
Um, or yeah, like you say, youknow, I want the maybe the
brightest leaves of the sycamoretrees to be toned down 15%, or I
(54:20):
want you know the shadows to bedenoised 10% or something like
that.
I feel like all of these sort ofmethods, as long as the
photographer retains thecreative control over how the
image looks, um, I think thoseare good directions for us to
explore.
Um, I do think a lot of times,especially new photographers who
have a vision for how they wantthe image to look, but they
(54:42):
don't have the vocabulary yet toknow how to describe it, this
would be a good way for them toget kind of get started.
I think it maybe doesn't reallyhelp experienced photographers
who already know what they wantto do and know how to use the
controls to do it.
But for those who are justthey're more like comfortable
describing the intended outcome,but don't really know yet how to
(55:02):
make it work tool-wise, this might be an interesting avenue
for them.
SPEAKER_05 (55:07):
That that is so
cool.
Um looking at all your images onyour own website here, is is
there one story in inpost-production, you know, one
image that says this one I'mreally proud of, or this one, no
matter what I try, I see I can'tget to the way I really like it.
SPEAKER_04 (55:22):
Well, Scott, you
know, the interesting thing
about working on all the edittools in Lightroom, this is
going to sound very strange, butI have this weird pattern where
once I've worked on a tool for along time, I actually try not to
use it in my photos.
I don't know whether this is areactionary thing.
SPEAKER_05 (55:41):
Come on, you you're
using contrast in here.
I can do it.
Yes, this is true.
SPEAKER_04 (55:44):
So the basic stuff,
of course, I use a lot.
But uh the interesting thing, Ithink from almost all the photos
on the page, I use very littleoutside of the basic panel.
Uh serious stuff that's inmasking, uh, of course, that I
use.
But um, I think some of the onesI'm proudest of the ones where I
might have done like a slightcrop and done a little bit of a
(56:05):
contrast adjustment, but it wasalready very close to how I
wanted it to look.
Um, the ones that I have doneand come back to recently are
ones that typically have thesun, they're almost like backlit
scenes.
So like I have an image of uhNepal with uh fishtail mountain
with uh prayer flags in theforeground that like Yeah,
(56:26):
beautiful shot.
And so that's that's aninteresting example of an image
where normally for a landscapeimage, I would you know kind of
set up the image and try toanticipate a little bit and try
to be very considerate with it.
And this is one of those caseswhere I felt like it broke the
rules a little bit in the sensethat you know I was on a trek
and I didn't have my tripod, Iwas trying to go light.
(56:48):
And this was not an image thatwas planned, but I knew the
morning that we were trekking upthat way that, oh, well, the sun
is coming up behind me, and Isaw these prayer flags, like
maybe if I wait like fiveminutes here, uh something, you
know, it will happen to come upbehind the mountain.
Um, and so uh this is like animage that was kind of found on
the spot.
Right.
(57:09):
Um and then in post-processing,at the time, you know, it's very
hard to convey the sense oflight in the picture just
because, you know, one can'treally convey the brightness of
the sun.
But um afterwards, now in thepast few years, with um the
display technology getting thatmuch better, we've been able to
use the newer uh HDR outputfeature in Lightroom, at least
(57:32):
for HDR-capable displays, toreally edit this picture in a
way I think that shows off thefeeling more of being there and
seeing the light shine throughthe prayer flags.
That's right.
SPEAKER_05 (57:43):
Yeah.
I mean, and again, I'm lookingat it right now.
It's such a magnificent shot.
And you mentioned HDR, whichwe're gonna run out of time here
in a second, but I want to telleverybody that you know you've
got um a wonderful blog, youknow, where you talk about
things like HDR, where you talkabout some of the uh elements
that go into post-production ina really clear and and
(58:05):
instructive way.
Yeah, kudos to you, man.
That is absolutely great workout there.
And and I'm I was listening toyou and and you were you know,
chuckling, saying once I work ona you know a tool for so long, I
tend not to use it.
I do absolutely believe that thebetter we get in
post-production, learning what'spossible, that actually makes us
(58:27):
better capturing stuff in camerabecause we know what's possible.
We we know where we're going.
Um and if I can catch it there,that's that's you know, that's
the magic.
That that's the dance that we'reall looking for.
Eric, thank you.
This has been an absolutemagnificent conversation.
I am impressed as all get out,not only with your own
photography, but the work you'redoing for every single one of
(58:49):
us, you know, at Adobe.
And you know, I I mentioned atthe beginning, you know, we've
all seen your name because it'sthere, it's there in that list
of names that goes by real quickat the very beginning.
Thank you very much.
I've enjoyed this.
SPEAKER_04 (59:00):
Thank you so much,
Scott, for having me.
It's been great fun.
SPEAKER_00 (59:06):
Frames.
Because excellent photographybelongs on paper.
Visit us at www.readframes.com.