inklaire opened this issue on May 23, 2010 · 242 posts
kobaltkween posted Mon, 31 May 2010 at 7:04 PM
Quote - You know, things were so much simpler before computers. I get a photo printed, hand the print to you, and I could expect that the image on the print didn't change during the exchange.
A simple way for me to understand GC and the need for it is with basic math. A pixel in an image is represented by three colors: red, green, and blue. Each color has a value between 0 (black) and 255 (full brightness) when represented using 8 bits per channel. The problem is that CRT monitors didn't display the values linearly. If you had a red pixel with a value of 100 and another with a value of 200, you would expect the latter to be twice as bright as the former. This is not the case: while the extremes display close to linear, the midtones display dimmer. GC adjusts the colors, not by brightening the entire image, but by adjusting the different tones by the amount necessary to make them display linearly. As I understand it, LCD monitors do not suffer the same problem, but they are built to emulate it since everything already out there accounts for it.
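To make that concrete, here is a minimal sketch assuming a simple power-law display gamma of 2.2 (an approximation of the sRGB transfer curve; real monitors use a slightly different piecewise function). It shows that a stored value of 200 emits far more than twice the light of a stored value of 100:

```python
def displayed_brightness(value, gamma=2.2):
    """Relative linear light output for an 8-bit stored value,
    assuming a plain power-law display gamma."""
    return (value / 255.0) ** gamma

ratio = displayed_brightness(200) / displayed_brightness(100)
print(ratio)  # about 4.6, not the 2.0 the stored values suggest
```

In other words, equal steps in stored values are not equal steps in emitted light, which is exactly why the renderer needs linearized input.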
In a linear workflow, the texture maps need to be uncorrected, because if you do a final correction during the render, those images will have been corrected twice, giving the render a washed-out appearance.
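The washed-out look from double correction is easy to demonstrate. This sketch assumes a plain 1/2.2 power-curve correction (a common approximation of the sRGB encode step); applying it twice pushes a midtone noticeably toward white:

```python
def correct(v, gamma=2.2):
    """Apply a gamma-correction (encode) step to a value in [0, 1]."""
    return v ** (1.0 / gamma)

mid = 0.2                 # a linear midtone
once = correct(mid)       # about 0.48 -- correct display value
twice = correct(once)     # about 0.72 -- double-corrected, washed out
print(once, twice)
```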
did you read any of my posts at all? i ask because i quoted the Wikipedia entry directly dealing with this.
no, CRTs do not natively support the sRGB spec. they are, however, altered to support it. LCDs do natively support it, as do cameras and scanners. digital images never get "corrected"; they are created in sRGB color space to begin with, by either cameras or scanners or you (by way of image creation software). they need to be linearized because the renderer can't make its calculations properly with non-linear input. it wouldn't matter if they used a totally different color space than sRGB; they would still need to be linearized before the renderer made its calculations. it has nothing to do with the final correction. the final correction is an issue only if you are viewing the results on a screen or printing them on a printer calibrated to sRGB space.
you can think of it as two entirely separate procedures.
the renderer speaks one language. you need to make sure that everything it gets is in that language. if you know what language something is in, you can translate it to the renderer's language. digital images and colors that display on our monitors are in sRGB space, and we use linearization equations to translate them into the renderer's language.
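The "translation into the renderer's language" is the sRGB decode step. This sketch uses the piecewise formula from the sRGB specification (note that some render engines approximate it with a plain 2.2 power curve instead):

```python
def srgb_to_linear(c):
    """Map one sRGB channel value in [0, 1] to linear light,
    per the sRGB spec's piecewise transfer function."""
    if c <= 0.04045:
        return c / 12.92
    return ((c + 0.055) / 1.055) ** 2.4

# an sRGB midtone of 0.5 is much darker than 0.5 in linear light
print(srgb_to_linear(0.5))  # about 0.21
```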
your monitor speaks a second language. you need to make sure everything it gets is in that language. if you know what language you're giving it, you can translate that. if you just give it a digital image, hey, it doesn't need translation. it's already in the right language. if you give it something in linear space, like your renderer's final output (after all calculations, including IDL, are done), then it will garble it, like handing German to someone who only speaks English. you need to translate it into sRGB, which is what the monitor speaks.
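That final translation back to the monitor's language is the sRGB encode step, the inverse of the decode above. Again a sketch using the spec's piecewise formula (a plain 1/2.2 power curve is a common approximation):

```python
def linear_to_srgb(c):
    """Map one linear-light channel value in [0, 1] back to sRGB for
    display, per the sRGB spec's piecewise transfer function."""
    if c <= 0.0031308:
        return c * 12.92
    return 1.055 * c ** (1.0 / 2.4) - 0.055

# a linear render value of 0.2 must be brightened to display correctly
print(linear_to_srgb(0.2))  # about 0.48
```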
that final, corrected image is just like a digital photo in that it's in sRGB space. just like you can (and most photographers do) edit your photo after it's taken, you can edit your render.