bagginsbill opened this issue on Feb 01, 2009 · 207 posts
bagginsbill posted Sun, 12 April 2009 at 8:28 PM
Quote - am I to assume that the Color_Pow() function somehow converts the image to linear space and then does the gamma math to it?
Ummm. Gosh this is so difficult. I've seen this so many times in the last month.
I'm stumped and don't know how to answer, because the two-part question is actually two different questions, and the phrasing and coupling of these two parts either indicates a contradiction or represents a severe misunderstanding. Meaning, if I answer the question as asked, I will only confuse things more.
So let me ignore your questions and re-state what I've said (and all the other links say).
Images you're getting from cameras are designed to be seen on a computer, using the sRGB standard.
Images coming out of your render should be designed to also be seen on a computer, using the sRGB standard.
The sRGB standard is not a linear representation of luminance. In sRGB, it is incorrect to assume that doubling the value of a pixel should cause it to be twice as bright.
The sRGB color space is not quite exactly a simple power relationship. But it is very close to that, and so we can work with Gamma Correction (GC) as our working model. Until you are comfortable with GC, it would only serve to confuse you to talk about the true 100% accurate specification of the sRGB space. (It requires considerably more math.) While straight GC is slightly wrong, it is 1000 times more wrong to do nothing at all.
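A tiny numerical sketch (my own illustration, using the simple 2.2 power approximation rather than the exact sRGB formula) shows just how wrong the "twice the value = twice as bright" assumption is:

# Approximate conversion from a normalized sRGB pixel value (0..1) to linear
# luminance, using the simple gamma-2.2 model, not the exact sRGB spec.
def srgb_to_linear(v):
    return v ** 2.2

a = srgb_to_linear(0.25)   # ~0.047
b = srgb_to_linear(0.50)   # ~0.218

# Doubling the sRGB value multiplies the actual luminance by about 4.6x (2 ** 2.2),
# not 2x, which is why you can't do lighting math directly on sRGB values.
print(b / a)   # ~4.59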
Lighting calculations should be in linear space. It is correct when calculating a material's response to light that doubling the value of a pixel should represent twice the illumination. This is the fundamental principle of linear math - that x + x = 2x. It is the only kind of math that most people ever encounter. There are other kinds, and they are exceedingly difficult to manage. Anybody wanting to do any accurate shader/lighting work must do it linearly.
Therefore, when using images as data, we must convert them from sRGB to linear values. I often call this anti-GC, because the image you have was gamma corrected in order to end up non-linear. To bring it back to linear, you must do the opposite of that gamma correction, so we want to anti-GC the image. The math is:
image ** 2.2
There's really no point in discussing any other gamma value, because any image you have has almost certainly been gamma corrected by the factor 2.2, whether it was hand-drawn or photographed. All discussion about the 1.8 gamma value for Mac has pretty much stopped - Apple has seen the futility of having two display standards in the modern Internet world where we all must share content.
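Here's what the anti-GC step looks like as a small Python sketch (illustrative only, not Poser's internal code; the function name is just mine):

# Anti-GC: convert an 8-bit sRGB texture value (0..255) to a linear value (0..1)
# by normalizing and then raising to the 2.2 power.
def anti_gc(pixel_8bit, gamma=2.2):
    return (pixel_8bit / 255.0) ** gamma

# Mid-gray as stored in an sRGB image is nowhere near 50% luminance:
print(anti_gc(128))   # ~0.22, the linear value the lighting math should see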
Then we must calculate reflections (whether specular or diffuse) and refractions using the formulas that describe the universe. These formulas are all built on the fundamental premise that for any value x, whatever luminance that represents, 2x is twice that. This is necessary for things to work out conveniently, such as the simple fact that the effective diffuse reflection of a specific amount of illumination depends on the angle of incidence, specifically on the cosine of the angle of incidence. Further, if I double the light intensity, the diffuse reflection also doubles.
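For illustration, here is the textbook Lambertian diffuse term written out in linear space (a generic sketch, not any particular renderer's shader code):

import math

# Lambertian diffuse reflection in linear space:
# reflected value = (linear albedo) * (light intensity) * cos(angle of incidence)
def diffuse(albedo_linear, light_intensity, incidence_deg):
    return albedo_linear * light_intensity * math.cos(math.radians(incidence_deg))

r1 = diffuse(0.22, 1.0, 60.0)   # ~0.11
r2 = diffuse(0.22, 2.0, 60.0)   # ~0.22 - double the light, double the reflection,
                                # exactly because everything here is linear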
Once all reflection/refraction modeling is complete, we end up with a linear color value after adding some things together. Before placing this into the image, this must be converted to the sRGB color space. We cheat, and use gamma correction. The formula is:
y ** (1 / 2.2)
Where y is the rendered color.
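Putting the whole round trip together for a single pixel, as a hypothetical sketch (made-up numbers, just to show the order of operations):

# One pixel through the whole workflow (for illustration only):
texture_srgb = 0.5                    # value as stored in the image file (0..1)
albedo = texture_srgb ** 2.2          # anti-GC: sRGB -> linear, ~0.22
lit = albedo * 1.5 * 0.8              # linear lighting math (intensity, cosine, etc.)
output_srgb = lit ** (1.0 / 2.2)      # GC: linear -> sRGB for display
print(output_srgb)                    # ~0.54, ready to go into the final image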