aRtBee opened this issue on Oct 17, 2011 · 76 posts
bagginsbill posted Tue, 18 October 2011 at 3:22 AM
The purpose is that you see what the renderer is trying to show you. I don't know how to explain it in words any better than that. The purpose is that you use all the information consistently and introduce as few errors as possible.
Examples:
Subsurface scattering leaks light through an ear, and decides that despite the fact that no direct light is striking the front of the ear, there should be a small amount of red showing on the unlit side. Assume this part of the ear would otherwise be simply black. Suppose that the amount of red scattering out the ear here should be red 10% of max. Max is, of course, 255 in 8-bit integer (I8) format. In unit format, internal to the renderer, it is .1. So how should .1 red, or 10% red, be sent to the image?
The old way is to take 10% of 255, so the value stored is 26. What is the luminance of 26? Is it .1? No. Because of gamma, the luminance of this value when you see it on your screen is (26/255)^2.2. That effective luminance is about .66%. This is extremely dark and not even close to the 10% that was intended.
The correct value to emit to the render is .1^(1/2.2) * 255, which comes out around 89. That is the meaning of the number 89 red: it means "monitor, please shoot 10% of your maximum red here". The number 26 does not mean 10%. It means .66%.
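To put numbers on this, here is a minimal Python sketch, assuming a simple 2.2 power-law gamma (real sRGB uses a slightly different piecewise curve):

```python
GAMMA = 2.2

def encode(linear):
    """Linear light fraction (0..1) -> 8-bit pixel value (unrounded)."""
    return (linear ** (1 / GAMMA)) * 255

def decode(pixel):
    """8-bit pixel value -> linear light fraction (0..1)."""
    return (pixel / 255) ** GAMMA

naive = round(0.1 * 255)   # 26: the "old way", 10% of 255
displayed = decode(naive)  # ~0.0066: what 26 actually shows on screen
correct = encode(0.1)      # ~89.5: the value that really means 10% red
```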
Now suppose you have a texture in which you have, using Photoshop, drawn a certain amount of red. As you made your drawing, you decided to use 89 red, because it looked right to you. Internally, you were making a decision to add more red until it looked right. What you were looking at was 10% red. What we want the renderer to use in its calculations involving this texture is that here it is 10% red. The way to do that is to "decode" the image. The image values are not linear. The value 89 means literally 10%. It does not mean 89/255 or 35% - it does not mean that at all. But if you ignore gamma, that's what you're using.
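The decode step in numbers, under the same simple 2.2 power-law assumption:

```python
GAMMA = 2.2

pixel = 89                      # the red value drawn in Photoshop
wrong = pixel / 255             # ~0.35: what a gamma-ignorant renderer uses
right = (pixel / 255) ** GAMMA  # ~0.10: the linear fraction the artist meant
```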
Now ignoring gamma means that you overestimate colors on input, and you underestimate them on output. The two "errors" seem to cancel out, and in general they do, so long as you stay near the upper third of brightness. If the incoming and outgoing colors are in the same part of the luminance range, then the cancellation is pretty good. If they are in different parts, then the cancellation is pretty bad.
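A small sketch of that cancellation, again assuming a plain 2.2 power law: a pixel that is simply copied survives a gamma-ignorant round trip exactly, but as soon as the renderer multiplies by a light level the errors stop canceling:

```python
GAMMA = 2.2

def decode(p):
    return (p / 255) ** GAMMA

def encode(f):
    return round((f ** (1 / GAMMA)) * 255)

pixel = 200

# Straight pass-through: ignoring gamma on input and output cancels exactly.
ignorant_copy = pixel                       # 200
correct_copy = encode(decode(pixel))        # 200

# Halve the light level: the two errors no longer cancel.
ignorant_half = round(pixel * 0.5)          # 100: far too dark on screen
correct_half = encode(decode(pixel) * 0.5)  # 146: actually half the luminance
```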
As you get into darker things, where the incoming value is high but the surface is turned 88 degrees to a light source, the errors do not quite cancel out, and in fact become exceedingly obvious.
The render I did of the ball, ground, and 2% mirror shows this clearly. The input is white, which has no overestimation. Why? Because no matter what gamma you're assuming, 1 is 1 and 0 is 0. Maximum and minimum values are not affected by gamma. It's the gradient between them that is affected by gamma. So when you try to calculate an 80% light level, at an angle that reduces light source intensity to half (now 40%), and the material only reflects 80% of that (32%), and the mirror only reflects 2% of that (.64%), what is the value that should be written into the file? Why, it is none other than our old friend, # 26. The # 26 represents this particular case, which I carefully set up to illustrate in my render.
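That chain of multiplications, sketched in Python (gamma 2.2 assumed):

```python
GAMMA = 2.2

light = 0.80     # 80% light level
angle = 0.50     # angle cuts intensity to half
material = 0.80  # material reflects 80%
mirror = 0.02    # 2% mirror

linear = light * angle * material * mirror  # 0.0064, i.e. 0.64% luminance

# Gamma-encoded, 0.64% luminance lands on pixel value 26.
correct = round((linear ** (1 / GAMMA)) * 255)  # 26: our old friend
# A gamma-ignorant renderer would write nearly black instead.
ignorant = round(linear * 255)                  # 2
```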
Renderosity forum reply notifications are wonky. If I read a follow-up in a thread, but I don't myself reply, then notifications no longer happen AT ALL on that thread. So if I seem to be ignoring a question, that's why. (Updated September 23, 2019)