ETTR ??

ETTR stands for "Expose To The Right," which means placing the brightest highlights you want to retain at the right edge of the camera's histogram. The idea dates back to the early 2000s, when digital cameras had relatively low Dynamic Range (DR), ISO performance, and Tonality (accuracy)... It started with an article by Michael Reichmann in which DR (in stops of light) was correlated with the file bit depth (12-bit). The claim was that the last stop of DR (the right side of the histogram) contains 50% of the data/detail, and every stop below that contains 50% less.

I have several problems with that.

While exposure/DR *is* logarithmic, and the last stop does contain 50% of the total exposure with every stop below it containing 50% less, that is not the same thing as data/detail in a digital image. The bit depth of the image file describes how many bits are used to describe a single pixel (it's a bits-per-pixel specification). Current cameras use 12-bit or 14-bit files, which means a total of 4,096 possible tonal values per pixel for 12-bit and 16,384 for 14-bit. And since sensors use an RGB color scheme, we can (erroneously) correlate that to 16,384 values per color. For reference, an 8-bit color scheme allows 256 tonalities per color, or over 16 million colors... more than the human eye can discern... so where's the benefit of greater bit depth in the file? Well, I'll get to that.
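To put those counts side by side, here's a minimal sketch (plain arithmetic only, nothing camera-specific is assumed):

```python
# Tonal values available per pixel at common raw bit depths,
# and the total colors an 8-bit-per-channel RGB image can describe.
for bits in (8, 12, 14):
    print(f"{bits}-bit: {2 ** bits:,} tonal values per pixel")

print(f"8-bit RGB: {256 ** 3:,} possible colors")  # 16,777,216 -- "over 16 million"
```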
First, let's consider DR/exposure... Take two pixels in a lower stop of DR exposed at 25 and 50 vs. two pixels in a higher stop exposed at 256 and 512. Yes, the higher stop has larger numbers, but both pairs are exactly one stop apart; the tonal separation between the two pixels is the same... one stop. There is no "more detail" available there. And if you remap 256/512 as 25/50 in post, then there is *definitely* no difference: you have discarded "the data." The "stops" do not matter in this sense.
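A minimal sketch of that remapping, assuming simple linear raw values and using the example numbers above:

```python
from math import log2

# Two pixel pairs from the example: a dark stop (25 -> 50) and a brighter stop (256 -> 512),
# treated as simple linear raw values.
dark_pair = (25, 50)
bright_pair = (256, 512)

# Both pairs span exactly one stop (a 2x ratio in linear values).
print(log2(dark_pair[1] / dark_pair[0]))      # 1.0
print(log2(bright_pair[1] / bright_pair[0]))  # 1.0

# "Recovering" the brighter pair in post: scale it down to the darker pair's level.
scale = dark_pair[0] / bright_pair[0]
print(tuple(round(v * scale) for v in bright_pair))  # (25, 50) -- the extra raw values are discarded
```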
What matters here is the tonality/accuracy of separation between pixels the camera is capable of. For every current camera (2015) that is less than 9 bits (8.73 for the Nikon D810, one of the very best on the market). That's significantly less than the 12- or 14-bit file depth, but still above the "true color" 8-bit depth. Another way to think of it is as roughly 9 bits of accuracy "within" each stop of DR.
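One rough way to read that figure (treating the quoted tonal-range number as a straight bit count, which is only an approximation):

```python
# The file can store 2**14 = 16,384 values per pixel, but the sensor can only
# reliably distinguish roughly 2**8.73 of them (the D810 figure quoted above).
file_values = 2 ** 14
distinct_tones = 2 ** 8.73
print(f"{file_values:,} file values vs ~{distinct_tones:.0f} distinguishable tones")
```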

Next, let's consider the color accuracy of current sensors... for the D810 it is 22.4 bits. How is that possible with a 12- or 14-bit file? Well, as I noted above, it is erroneous to correlate the file bit depth to the color bit depth (that should be apparent now), but it is also erroneous to correlate it to a simple RGB color scheme. That's because a blue photosite is not *only* sensitive to blue light... it is sensitive to a "blue-centric" range of wavelengths. It may also respond to some violets and greens on either side of blue, but it will be most sensitive to blue. The same is true for the red and green photosites. This is where the "fuzzy math" of demosaicing comes in... Green and red do not combine to make orange; they make yellow (additive light colors). But maybe the red pixel's value is partly due to a sensitivity to yellow/orange as well as red, and maybe the adjacent green pixel's value is partly due to a sensitivity to yellow... If you take orange light, or yellow and red together, you get orange... and that is the result in the demosaiced image: a color/luminosity accuracy beyond what we previously accounted for with the ETTR theory, and a notable benefit to having 12- and 14-bit file depths, because the camera must record both the color and the luminosity (tonality) as accurately as possible.
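A quick additive-mixing check, using pure-primary RGB values purely for illustration (as noted above, real photosites respond to a range of wavelengths, not a single primary):

```python
# Additive light mixing: full red plus full green gives yellow, not orange.
red   = (255, 0, 0)
green = (0, 255, 0)
mixed = tuple(min(r + g, 255) for r, g in zip(red, green))
print(mixed)  # (255, 255, 0) -> yellow

# Orange sits between them (roughly (255, 128, 0)), which is why the demosaic
# has to infer it from how strongly the red and green photosites each responded.
```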

And that brings me to the real issue with ETTR... color, luminosity, and demosaicing algorithms. The system has to determine what color to display based upon the electrons collected according to the color sensitivity curves/spectrums of the individual photosites... and that math is based upon a "normal" exposure. If you overexpose the scene, the light absorbed by a "red" photosite may be attributed to red light, where that photosite is most sensitive, when it is actually due to orange light, where that photosite is less sensitive and therefore should have collected fewer electrons (but didn't). And if it is attributed to red light instead of orange light, the output colors will not only be wrong, but at a lower luminance value as well. That's because more electrons collected from red light represent the same "exposure" as fewer electrons collected from orange wavelengths, for a photosite which is more sensitive to red wavelengths.
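Here is a toy model of that ambiguity; the sensitivity figures are invented solely for illustration and are not real filter response data:

```python
# Toy model: electrons collected = light intensity * photosite sensitivity at that wavelength.
# The sensitivity figures below are invented purely for illustration.
red_site_sensitivity = {"red": 0.9, "orange": 0.6}

electrons_from_red    = 100 * red_site_sensitivity["red"]     # dimmer, genuinely red light
electrons_from_orange = 150 * red_site_sensitivity["orange"]  # brighter, orange light
print(electrons_from_red, electrons_from_orange)  # 90.0 90.0 -- indistinguishable at this photosite

# If demosaicing assumes those 90 electrons came from red light, it reconstructs
# "red at intensity 100" instead of "orange at intensity 150": wrong hue and lower luminance.
```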
Here is a composite of two test images. Both images were taken in raw and processed exactly the same. The only difference is that the top one had a 2-stop longer exposure and was recovered with -2 exposure in LR. The overexposed image was not blown/clipped in any channel prior to recovery. WB was set with the picker using the #021 squares, and the minimum ISO for that camera (Nikon 1) was used.

As you can see, the colors are not the same... most notably (in this case) in squares like 2/6/7/11/12/16... Those are easily apparent to the eye, but none match exactly if you check with a color meter... and more importantly, you will never get them all to match with any universal correction. If I correct patch 006 as closely as possible, then other patches (e.g. patch 012) are even worse.
Neither image is a 100% color match... but the normal exposure is *much* closer. E.g., patch 12 should be RGB 255/164/26; the ETTR image is 255/207/0, and the normal exposure is 254/186/29. The only way you would ever get every color right would be to adjust them all individually. (Part of the issue here is that the lighting used was a high-CRI LED panel, but "high CRI" does not mean "perfect color rendering.")
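Putting those quoted values side by side, the per-channel differences from the reference color look like this:

```python
# Per-channel error vs. the reference color for patch 12 (values quoted above).
reference = (255, 164, 26)
ettr      = (255, 207, 0)
normal    = (254, 186, 29)

def channel_errors(sample, ref=reference):
    return tuple(abs(s - r) for s, r in zip(sample, ref))

print("ETTR  :", channel_errors(ettr))    # (0, 43, 26)
print("Normal:", channel_errors(normal))  # (1, 22, 3)
```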

So, even if all of the theory about ETTR were right (and it's not), you are causing potential color/luminance shifts in your images that cannot be corrected.

And let's go back to DR. Your camera has a certain sensitivity range... many today are in the 14-stop range. If your scene fits within that DR, then great... there's no problem. And if the scene does not fit, then you can only choose which end to sacrifice in a single exposure. There's really no benefit to ETTR as such.

HOWEVER, there is one scenario where there is a benefit to a certain amount of ETTR shift, and that's in very dark images where there is a distinct lack of light in the majority of the frame (and when you are already at minimum/base ISO). That's because with a lack of *sufficient information* (electrons collected), you will get color noise due to demosaicing errors, which can obscure details. And the severe lack of information makes the darks very intolerant of manipulation in post, with a strong tendency toward banding and other issues.

If you have issues with or questions on any of this, I am more than happy to discuss it in the forums.
