incantrix opened this issue on Oct 24, 2010 · 55 posts
AnAardvark posted Wed, 24 November 2010 at 1:03 PM
Quote - You make valid points that must be thought about, but the essence of render GC has nothing to do with monitor calibration or compensation for device variance.
It's much simpler than that.
8-bit-per-channel digital images encode each color channel with very few bits - just 8, or 256 possible levels. This is a serious limitation. (One that does not apply to HDR or EXR file formats.)
With only 256 levels available, a linear mapping of the numerical values to luminance would waste levels on imperceptibly fine steps in the highlights while producing severe banding in the shadows.
Therefore the industry settled on the sRGB standard. This is, as you know, a non-linear mapping.
For the most part, all pictures, whether from a camera, a renderer, a scanner, or hand-drawn in Photoshop, are encoded this way.
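For the curious, the sRGB mapping is a published piecewise formula, not just a raw power curve. Here's a minimal Python sketch of the encode/decode pair (values on a 0.0-1.0 scale; many applications approximate the whole thing with a plain 2.2 gamma):

```python
def linear_to_srgb(c):
    """Encode a linear-light value (0.0-1.0) with the sRGB transfer curve."""
    if c <= 0.0031308:
        return 12.92 * c
    return 1.055 * c ** (1 / 2.4) - 0.055

def srgb_to_linear(s):
    """Decode an sRGB-encoded value (0.0-1.0) back to linear light."""
    if s <= 0.04045:
        return s / 12.92
    return ((s + 0.055) / 1.055) ** 2.4
```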
So, in a way, sRGB is very much like Dolby. In audio recording, the basic idea of Dolby is to compress the dynamic range on recording and expand it back on playback. For example, a symphony orchestra spans about 70dB, but a standard chromium-dioxide cassette tape (which dates me) had only about a 55dB range between tape hiss at the quiet end and distortion at the loud end. Dolby C compresses the dynamic range by about 15dB on recording and expands it back on playback. Most of the compression is applied at higher frequencies (since most tape noise is moderately high frequency) and at low amplitudes.
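To put the Dolby parallel in numbers, here is a toy round-trip sketch (hypothetical values, and a plain 2.2 gamma standing in for the full sRGB curve): a deep shadow tone survives 8-bit quantization far better when it is compressed before quantizing and expanded afterwards.

```python
linear = 0.002  # a deep shadow tone, on a 0.0-1.0 linear-light scale

# Direct linear quantization: the nearest 8-bit level is nearly double the input.
q_direct = round(linear * 255) / 255                     # ~0.0039, ~96% error

# "Companded" quantization: compress with gamma, quantize, expand on playback.
q_gamma = (round(linear ** (1 / 2.2) * 255) / 255) ** 2.2  # ~0.0020, ~2% error

print(q_direct, q_gamma)
```

Same 256 levels either way; the non-linear encoding just spends them where the ear (or eye) actually notices.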