Quote - > Quote -
If you are interested, take a look at this fascinating -certainly in the context of this discussion- article from a few years ago, the "White's Illusion" that plagues the human brain can also occur in "any system that tries to emulate human vision." (quote from the article, emphasis mine)
So what does it say about me that "White's Illusion" looks like two equally grey boxes to me? I have to really concentrate to get the one on the left to look darker.
The first paragraph says "Like most people, you should see one block of grey as darker than the other." It just means you're not like most people as far as your vision goes; your vision is more accurate to the physical world than most people's. Everybody is different one way or another, and even people with very "correct" vision will sense colors slightly differently from each other.
Too late to edit my post, but in fact, the boxes on the right should be darker, at least if you see how I do.
I made my own White's illusion. It's the same as in the article, only squeezed down. Squeezing the boxes down exaggerates the effect (see if they still look the same to you), and stretching them bigger will decrease the effect of the illusion.
You know, things were so much simpler before computers. I get a photo printed, hand the print to you, and I could expect that the image on the print didn't change during the exchange.
A simple way for me to understand GC and the need for it is with basic math. A pixel in an image is represented by three colors: red, green, and blue. Each color has a value between 0 ( black ) and 255 ( full brightness ) when represented using 8 bits/channel. The problem is that CRT monitors didn't display the values linearly. If you had a red pixel with a value of 100 and another with a value of 200, you would expect the latter to be twice as bright as the former. This is not the case: while the extremes display close to linear, the midtones display dimmer. GC adjusts the colors, not by brightening the entire image, but by adjusting the different tones by the amount necessary to make them display linearly. As I understand it, LCD monitors do not suffer the same problem but are built to emulate it, since everything out there already accounts for it.
In a linear workflow, the texture maps need to be uncorrected because if you do a final correction during the render, those images will have been corrected twice giving it a washed out appearance.
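The midtone behavior and the double-correction washout described above can be sketched numerically (assuming the usual 2.2 exponent; this is an illustration, not Poser's actual code):

```python
# Sketch of the gamma arithmetic described above (display gamma 2.2 assumed).
def encode(linear, gamma=2.2):
    """Gamma-encode a 0..1 linear value for display (what GC does on output)."""
    return linear ** (1.0 / gamma)

def decode(encoded, gamma=2.2):
    """Gamma-decode back to linear (what a CRT effectively does)."""
    return encoded ** gamma

# Midtones display dimmer than their linear value on an uncorrected display:
print(decode(100 / 255))   # ~0.128, not the ~0.392 you might expect
print(decode(200 / 255))   # ~0.586 -- far more than twice as bright as the above

# Double correction: a texture that is already gamma-encoded, encoded again,
# ends up too bright in the midtones -- the "washed out" look.
texel = 0.5                     # already gamma-encoded channel value
print(encode(texel))            # ~0.73: corrected twice, washed out
print(encode(decode(texel)))    # 0.5: linearize first, then correct -- neutral round trip
```

The same round trip applies per channel to a real texture map, which is why a linear workflow linearizes the maps before rendering.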
As for my own renders, I use GC and also try to restrict myself to three lights in a scene.
"if you do a final correction during the render, those images will have been corrected twice giving it a washed out appearance."
I'm thinking that is my problem. I'm probably using textures that have been made with corrected texture maps. Although, saying that, Vue 8.5 uses GC, and the same scene with GC in Vue looks richer and clearer than the Poser render with GC of the same thing.
Of course the lighting is the difference I suppose.
Love esther
I aim to update it about once a month. Oh, and it's free!
I suppose in poser the trick is going to be recognizing that washed out look and if you get it in a render, try turning GC off.
I think I found out what's "wrong" with Poser Pro gamma correction and dynamic hair. Again I did some experiments today and came to this conclusion (please correct me if I'm wrong, because I think this isn't a simple problem): gamma correction is meant to work with texture maps, to make the renderer work with linear, gamma-decoded maps. If I'm right, it actually lowers the midtones of a texture map before it's sent to the renderer, and the renderer boosts the midtones of the final picture to get it properly gamma corrected.
But dynamic hair doesn't use texture maps; it uses a hair-lighting node. So the gamma correction isn't needed for a hair node, and you have to decode (lower the midtones of) the hair node yourself if you're using Poser Pro GC.
This can be done the same way as BB does for his VSS-skinshader, by plugging in a power colormath node with a single add mathnode of 2.2 to value2, after the hair node.
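As a sketch of the arithmetic behind that anti-gamma trick (assuming the renderer's output correction is a plain 1/2.2 power; whether Poser Pro uses exactly that curve is an assumption, and `anti_gc`/`final_gc` are just illustrative names):

```python
# Pre-darken a procedural color so the renderer's final 1/2.2 correction cancels.
GAMMA = 2.2

def anti_gc(color):
    """What a Pow (color math) node with exponent 2.2 does to each channel."""
    return color ** GAMMA

def final_gc(color):
    """The renderer's output-stage gamma correction."""
    return color ** (1.0 / GAMMA)

c = 0.35                       # some procedurally generated channel value
print(final_gc(c))             # ~0.62 -- brightened if passed through untouched
print(final_gc(anti_gc(c)))    # ~0.35 -- round trip leaves it as designed
```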
You also have to reduce the translucency value, which a lot of dynamic hair shaders use.
If this is not clear, I can make a screenshot.
Best regards,
Bopper.
-How can you improve things when you don't make mistakes?
Quote - You know, things were so much simpler before computers. I get a photo printed, hand the print to you, and I could expect that the image on the print didn't change during the exchange.
A simple way for me to understand GC and the need for it is with basic math. A pixel in an image is represented by three colors: red, green, and blue. Each color has a value between 0 ( black ) and 255 ( full brightness ) when represented using 8 bits/pixel. The problem is that CRT monitors didn't display the values linearly. If you had a red pixel with a value of 100 and another with a value of 200, you would expect the latter to be twice as bright as the former. This is not the case and while the extremes display close to linear, the midtones display dimmer. GC adjusts the colors, not by brightening the entire image, but by adjusting the different tones by the amount necessary to make them display linearly. As I understand it, LCD monitors do not suffer the same problem but are built to emulate it since everything out already accounts for it.
In a linear workflow, the texture maps need to be uncorrected because if you do a final correction during the render, those images will have been corrected twice giving it a washed out appearance.
did you read any of my posts at all? i ask because i quoted the Wikipedia entry directly dealing with this.
no, CRTs do not natively support the sRGB spec. they are however altered to support it. LCDs do natively support it, as do cameras and scanners. digital images never get "corrected"; they are created in sRGB color space to begin with, by either cameras or scanners or you (by way of image creation software). they need to be linearized because the renderer can't make its calculations properly with non-linear input. it wouldn't matter if they used a totally different color space than sRGB; they would still need to be linearized before the renderer made its calculations. it has nothing to do with the final correction. the final correction is an issue only if you are viewing them on a screen or printing them on a printer calibrated to sRGB space.
you can think of it as two entirely separate procedures.
the renderer speaks one language. you need to make sure that everything it gets is in that language. if you know what language something is in, you can translate it to the renderer's language. digital images and colors that display on our monitors are in sRGB space, and we use linearization equations to translate them into the renderer's language.
your monitor speaks a second language. you need to make sure everything it gets is in that language. if you know what language you're giving it, you can translate that. if you just get a digital image, hey, it doesn't need translation. it's already in the right language. if you give it something in linear space, like your renderer's final output (after all calculations, including IDL are done), then it will garble it like someone who speaks English being handed German. you need to translate it into sRGB, which is what the monitor speaks.
that final, corrected image is just like a digital photo in that it's in sRGB space. just like you can (and most photographers do) edit your photo after it's taken, you can edit your render.
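Those two "translations" can be written out. These are the standard sRGB equations (from the sRGB specification, not from this thread):

```python
# The two "translations" in the post above, as the standard sRGB equations.
def srgb_to_linear(c):
    """Monitor/camera language -> renderer language (linearization)."""
    return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4

def linear_to_srgb(c):
    """Renderer language -> monitor language (applied to the final render)."""
    return c * 12.92 if c <= 0.0031308 else 1.055 * c ** (1 / 2.4) - 0.055

mid = srgb_to_linear(0.5)
print(mid)                      # ~0.214: a middle gray on screen is dim in linear terms
print(linear_to_srgb(mid))      # ~0.5: the round trip is neutral
```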
Quote - > Quote - > Quote -
If you are interested, take a look at this fascinating -certainly in the context of this discussion- article from a few years ago, the "White's Illusion" that plagues the human brain can also occur in "any system that tries to emulate human vision." (quote from the article, emphasis mine)
So what does it say about me that "White's Illusion" looks like two equally grey boxes to me. I have to really do some concentrating to get the one on the left to look darker.
Too late to edit my post, but in fact, the boxes on the right should be darker, at least if you see how I do.
I made my own White's illusion. It's the same as in the article, only squeezed down. Squeezing the boxes down exaggerates the effect (see if they still look the same to you), and stretching them bigger will decrease the effect of the illusion.
The one on the right looks darker in your example but the example they showed actually had three vertical strips. I was looking at it wrong. I was comparing the grey strip on the far left to the one on the far right. The one in the middle does look lighter.
Quote - I think I found out what's "wrong" with Poser Pro gamma correction and dynamic hair. Again I did some experiments today and came to this conclusion (please correct me if I'm wrong, because I think this isn't a simple problem): gamma correction is meant to work with texture maps, to make the renderer work with linear, gamma-decoded maps. If I'm right, it actually lowers the midtones of a texture map before it's sent to the renderer, and the renderer boosts the midtones of the final picture to get it properly gamma corrected.
But dynamic hair doesn't use texture maps; it uses a hair-lighting node. So the gamma correction isn't needed for a hair node, and you have to decode (lower the midtones of) the hair node yourself if you're using Poser Pro GC.
This can be done the same way as BB does for his VSS-skinshader, by plugging in a power colormath node with a single add mathnode of 2.2 to value2, after the hair node.
You also have to reduce the translucency value, which a lot of dynamic hair shaders use.
If this is not clear, I can make a screenshot.
Best regards,
Bopper.
no, you're wrong. as i posted previously, gamma correction applies to all color input, and the whole range of colors. the only parts of the range that are not transformed are 1 and 0 for each color. this is why bagginsbill's many, many procedural shaders without material based GC work in Poser Pro. it is also, i suspect, why carodan's dynamic hair looks so good in Poser Pro. i don't work with dynamic hair nor do i use Poser Pro, so i can't talk about that specific node in Poser Pro, but i can tell you that your assumptions are way off.
my suggestion is to do material based correction (careful - if you're using IDL, you need to correct after IDL, which means correcting the final image, not the material). then compare it to application based correction. that way you can filter out what you expect from what it should be.
"no, CRTs do not natively support the sRGB spec. they are however altered to support it. LCDs do natively support it. as do cameras and scanners."
What I was referring to was the hardware aspect. In a CRT, increasing the voltage to the electron gun does not result in a linear increase in luminance. So a channel value of 128 ( for 8bit channels ) does not display as half-brightness, but rather as something slightly dimmer. Because of the natural physics of the electronics, the CRT is acting as a 'gamma decoder', as pointed out in the Wiki page. In order for the image to display the proper luminance values, the linear image needs to be 'gamma encoded'. An LCD does not naturally exhibit this behavior, so the electronics emulate it. On many LCDs, the gamma decoding value can be set.
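To put numbers on that (assuming a decode exponent of 2.2, the common approximation for CRT physics):

```python
# Worked example of the CRT behavior described above (decode gamma 2.2 assumed).
signal = 128 / 255                  # "half brightness" as an 8-bit channel value
luminance = signal ** 2.2           # what the electron-gun physics delivers
print(luminance)                    # ~0.22 -- dimmer than the 0.5 you'd expect

# Gamma-encoding the image with the inverse exponent compensates:
print((0.5 ** (1 / 2.2)) ** 2.2)    # 0.5 -- encoded signal displays at true half brightness
```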
As for colorspaces, here's a page that has a nice explanation of the relationship with sRGB and gamma.
*it is also, i suspect, why carodan's dynamic hair looks so good in Poser Pro.
*I read the whole thread about dynamic hair again and GC isn't mentioned at all.
*no, you're wrong. as i posted previously, gamma correction applies to all color input, and the whole range of colors. the only parts of the range that are not transformed are 1 and 0 for each color.
*I did some renders this evening with a white ball (color 1), without any textures, with a single spotlight and IBL. Poser Pro GC makes the diffuse shadows and highlights on the ball very hard, and I don't like that, but perhaps I'm doing something wrong.
I have to go to bed, I see what I can do tomorrow.
Best regards and goodnight,
Bopper.
bopper,
A "white" ball is still affected by GC at the end, unless it entirely comes out exactly white, i.e. the final rendered pixel is equal to 1.
You seem to be making some point by informing us that it was a white ball.
If you were to change that to a 50% gray ball (in linear color space) but you also increased your light levels exactly in compensation to the decrease in reflectance (to twice what you're using now), the rendered ball would look the same. Is that clear or no?
Shaders are doing a lot more than spitting out the color you put on the ball.
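The compensation point can be checked with a toy diffuse calculation (a sketch only, assuming a plain 2.2 output gamma; `rendered` is a hypothetical stand-in, not Poser's actual pipeline):

```python
# Halve the reflectance, double the light: the rendered pixel is unchanged.
def rendered(albedo, light, gamma=2.2):
    """Toy diffuse pixel: reflectance times light, clamped, then gamma corrected."""
    return min(albedo * light, 1.0) ** (1.0 / gamma)

print(rendered(1.0, 0.4))   # ~0.66 -- a "white" ball is still affected by GC...
print(rendered(0.5, 0.8))   # ~0.66 -- ...and a 50% gray ball under doubled light matches
print(rendered(1.0, 1.0))   # 1.0 -- the only value GC leaves alone
```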
Renderosity forum reply notifications are wonky. If I read a follow-up in a thread, but I don't myself reply, then notifications no longer happen AT ALL on that thread. So if I seem to be ignoring a question, that's why. (Updated September 23, 2019)
Quote - What I was referring to was the hardware aspect. In a CRT, increasing the voltage to the electron gun does not result in a linear increase in luminance. So a channel value of 128 ( for 8bit channels ) does not display as half-brightness, but rather as something slightly dimmer. Because of the natural physics of the electronics, the CRT is acting as a 'gamma decoder', as pointed out in the Wiki page. In order for the image to display the proper luminance values, the linear image needs to be 'gamma encoded'. An LCD does not naturally exhibit this behavior, so the electronics emulate it. On many LCDs, the gamma decoding value can be set.
As for colorspaces, here's a page that has a nice explanation of the relationship with sRGB and gamma.
actually, i find that page obscures a lot of important points and confuses others. i find the Wikipedia page much clearer and better for actually implementing sRGB transformations.
Quote - I read the whole thread about dynamic hair again and GC isn't mentioned at all.
read some of his posts on GC and IDL. he uses dynamic hair and IDL in everything of his i know about. his Miki 3 previews use dynamic hair, as do many of his images in GC discussion threads. i don't know about his dynamic hair discussions, but since every image i've seen him post has seemed to use the same workflow, i'm pretty sure he's just, you know, using GC. you only have to talk about it when people bring it up.
Quote - Poserpro GC makes the diffuse shadows and highlights on the ball very hard, and I don't like that, but perhaps I'm doing something wrong.
first of all, bagginsbill and carodan have posted ways of blurring the terminator. it involves using SmoothStep to blur the shading. i'm pretty sure i don't use that technique well, so i'll just suggest you look it up. second of all, since i don't know anything about your lighting, i'll mention once again that sRGB equations are a little more accurate than GC. this means that GC is flatter and less accurate in dark areas, which affects surface shading, cast shadows and dark diffuse colors. third, if you're working on an indoor scene, you might want to make sure your lights have some falloff. i realize you might already be doing this, but i thought i'd mention it just in case.
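The accuracy remark can be illustrated numerically (a sketch; the constants are from the standard sRGB piecewise curve):

```python
# The sRGB curve and a pure 2.2 power curve agree in the midtones but diverge
# in the darks, which is where surface shading and cast shadows live.
def srgb_decode(c):
    return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4

def gc_decode(c, gamma=2.2):
    return c ** gamma

for v in (0.02, 0.1, 0.5):
    print(v, srgb_decode(v), gc_decode(v))
# at 0.5 the two agree to within a couple of percent; at 0.02 the pure power
# law is several times darker than the sRGB value
```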
Quote - I did some renders this evening with a white ball, (color 1) without any textures, with a single spotlight and IBL . Poserpro GC makes the diffuse shadows and highlights on the ball very hard, and I don't like that, but perhaps I'm doing something wrong.
You're doing nothing wrong. This is how real light behaves. This is a real life sphere with a point light, and the line between light and shadow is very harsh. It only looks odd to us because we rarely see any object lit from a single direction, there is almost always indirect light in addition.
There actually is a bit of indirect lighting in that case, as the sun is shining on half of the side of the Earth facing the Moon, which is why the New Moon is visible:
www.dewbow.co.uk/glows/eshine1.html
I couldn't find a good crisp picture of the phases of Venus, which would not have that issue...
----------------------------------------------------------------------------------------
The Wisdom of bagginsbill:
"Oh - the manual says that? I have never read the manual - this must be why."

*You're doing nothing wrong. This is how real light behaves. This is a real life sphere with a point light, and the line between light and shadow is very harsh. It only looks odd to us because we rarely see any object lit from a single direction, there is almost always indirect light in addition.
I did some test renders again today and came to the same conclusion. Funny enough, the moon was also the first thing that came to my mind. There are a lot of things that are clear to me now. If I want to work with GC I have to lower my light levels. BB was absolutely right about that and deserves credit for that and all his investigations. I'm just not a person who is easily convinced; I have to try things out before I can accept them, but what's wrong with double checking?
Just switching on GC in Poser Pro isn't enough to make your renders better. You also have to change the lighting and the shaders. In combination with GC, IBL is almost a must to use. It just lightens the dark side of the moon.
I'm afraid my dynamic hair shader approach wasn't a solution at all, so I have to reinvestigate how to make the hair shader work with gamma correction. That's why I won't post my material room settings: they don't fix the problem. (Excuse me for that.)
If anyone is interested I can post my GC testrenders.
Best regards,
Bopper.
This is about accuracy. Accuracy is of interest when attempting to do something realistic. Realism is of interest when that's what you want. Please don't talk to me about whether art requires realism or accuracy. It doesn't, and I neither demand realism and accuracy nor care if you don't want to seek it. Do your own thing. Despite what some have said, I do not demand anything of your art.
But there are people who seek realism. Maybe they need to do a commercial showing a car ad, without an actual car. Maybe they just like making fantasy "photos" that cannot be done in real life. Doesn't matter why. The point is, if you're trying to show accurate lighting, and keep it simple instead of making it hard, then I try to help people understand how to do that. For people who ask me how to get more realism, I talk about GC. It's the simplest first step to an outcome that is "less wrong". Even if perfection is never achieved, less wrong is better than more wrong. By wrong I mean in the sense of physics, not art.
I also help with toon shaders, Vargas airbrush effects, and other forms of art. In those cases, I do not talk about GC.
This image done with:
PPro 2010, one spot light with inverse square falloff and IDL, but no GC.
I took the first image, which was not gamma corrected, into Photoshop.
Using Photoshop Levels, I adjusted the middle value from 1 to 2.2. This is the same as gamma correction, except that the incoming step of anti-gamma correction was not performed. So I also had to increase saturation, because the full linear workflow was not followed.
Notice the banding. This is because information was lost. The darker shades on the wall were recorded at very low levels in the original image. Postwork gamma correction can only adjust those individual values to their corresponding levels-adjusted values. The in-between values are not there in this version, because they did not exist in the data stored in the uncorrected image.
This is why postwork levels adjustment falls short. You are starting with less information, and the info you have is less accurate. You cannot fix this in postwork.
Notice also that the first (darkest) figure is still pretty much black. That's because in the uncorrected image, most of him was less than 1, i.e. 0. You cannot adjust levels around 0. There is no data. 0 to any power is still 0.
If you really want to do post-work levels adjustment in a scene with dark areas accurately, you must store the image in HDR or EXR format. Then the data is more than 8-bits and the low level detail can be recovered.
But from a JPEG or PNG, the data is lost forever.
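The banding and zero-data problems can be reproduced in a few lines (a sketch of the Levels-style adjustment, not Photoshop's actual code):

```python
# Postwork gamma on 8-bit data: the darkest input levels map to widely spaced
# output levels, and the in-between values simply don't exist in the source.
def postwork_gamma(v, gamma=2.2):
    """Levels-style middle slider at 2.2 applied to one 8-bit channel value."""
    return round(255 * (v / 255) ** (1 / gamma))

darks = [postwork_gamma(v) for v in range(6)]
print(darks)   # [0, 21, 28, 34, 39, 43] -- big jumps between levels = banding

# The zero-data problem: 0 to any power is still 0, so true black stays black.
print(postwork_gamma(0))   # 0 -- nothing to recover
```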
I understand what you're saying. The problem is that I'm visually used to certain effects in Poser without GC, like the soft diffuse shadows on a ball. I now understand that that is actually wrong, like Stewer showed with his moon picture and I discovered myself this afternoon (talk about an aha moment).
But shadows like that only exist in outer space or in a room draped in black velvet, so we always need IBL with ambient occlusion, or IDL in combination with GC, to get softer, more realistic shadows.
I still have to work on the dynamic hair shaders, but I'm making some progress.
Using inverse square falloff is also a good tip; I forgot about that one.
Best regards,
Bopper.
I chose that scenario, only because it would demonstrate the banding problem and the zero-data problem.
But much can be learned from studying that setup.
Here is the same, rendered with GC, but without IDL. Lacking the IDL, this is indeed very unrealistic. Compare this with my second render above.
Instead of IDL, I used IBL to produce some sort of secondary lighting. But the big problem with IBL is it is uniform through the whole scene. It is incapable of capturing the fact that the amount of ambient light varies across the scene. When you use IBL, the secondary light is the same throughout the entire universe.
IDL does not make that error. It varies the amount of secondary light according to how much bounced primary light is nearby. Thus, accuracy is increased.
The IBL image is not terrible, but compared to the IDL image, it is more wrong. Very much more wrong.
I think I have to learn all over again how to set up my lighting for my renders. I found out that lower levels of light work better with GC and IDL. I believe you said that ages ago, but now I'm starting to see why. In fact, I have to gamma-correct my brain to see how GC works.
It's a new challenge, but that keeps life interesting.
Bopper.
I missed your other post while typing, but you're right. I think it's one of the best new features of Poser 8 and Poser Pro 2010.
Realism isn't just for naked Vickies. Have a look at this image by tate.
http://www.renderosity.com/mod/gallery/index.php?image_id=2061041
These are unrealistic cartoon characters, shapes that can never occur in real life. So what. It's a fantastic image, and the realistic lighting and GC materials are used to very good effect in what is otherwise a toon image.
I'm almost certain that tate doesn't know any of the math. But he's an artist with vision and tools to execute that vision.
Bill makes a good point. Just as you can add a toon or artistic style to a "real" character to get something totally different, you can add a "real" shader to a toon character to get something completely new. It's about matching the mats to your vision, whatever that is. If GC can help, go for it. If not, then don't use it.
BTW the GCed one with robots and IDL looks the best to me.
WARK!
Thus Spoketh Winterclaw: a blog about a Winterclaw who speaks from time to time.
(using Poser Pro 2014 SR3, on 64 bit Win 7, poser units are inches.)
Quote - > Quote - I did some renders this evening with a white ball, (color 1) without any textures, with a single spotlight and IBL . Poserpro GC makes the diffuse shadows and highlights on the ball very hard, and I don't like that, but perhaps I'm doing something wrong.
You're doing nothing wrong. This is how real light behaves. This is a real life sphere with a point light, and the line between light and shadow is very harsh. It only looks odd to us because we rarely see any object lit from a single direction, there is almost always indirect light in addition.
This falls into what TVTropes called "The Coconut Effect". People are so used to not seeing that harsh line between really bright and black on images, TV, and film, where the lighting is arranged, manipulated, or rendered so as to show the entire object, that when they see a real image it looks wrong to them, even though it's actually right.
I'm just breezing in briefly (the man got me slaving at making paintings ATM).
KobaltKween's quite right - I use Poser pro GC pretty much exclusively now. I've found (as bb's examples suggest) that the output using GC gives a much better tonal balance to renders and that any postwork I do involving exposure/levels adjustments tends to be far more minimal and the results far less likely to result in ugly banding artifacts (a problem I frequently had before GC).
I have to admit though that I haven't quite figured out the best approach to rendering dynamic hair with IDL and GC. I thought I had a solid method but ran into problems again - I was just starting to do some new tests that I was posting in the aforementioned dynamic hair thread before I got distracted with commercial jobs. My best results to date with dynamic hair and IDL have been using HSVExponential Tone Mapping instead of GC, but I'm sure we'll get it licked before too long with GC. There's mainly an issue with some of the fine detail but I'm pretty sure it's a shader issue with the hair. When I get some free time I'll be doing more experiments.
PoserPro2014(Sr4), Win7 x64, display units set to inches.
www.danielroseartnew.weebly.com
Quote - not hesitating to correct you ;D.
Wikipedia
Quote - Many JPEG files embed an ICC color profile (color space). Commonly used color profiles include sRGB and Adobe RGB. Because these color spaces use a non-linear transformation, the dynamic range of an 8-bit JPEG file is about 11 stops. However, many applications are not able to deal with JPEG color profiles and simply ignore them.
afaik, Poser does not support reading color profiles. that's not what makes you need to linearize your texture. if the images are created by a digital camera or scanner, those devices use sRGB space. if you paint a texture, you see it through an sRGB device, so in that sense, the monitor will affect your image. but not in the sense you seem to mean.
...
the issue is more that if it looks right on your screen, the renderer can't work with it.
in my experience, the difference between Mac and PC was easier to handle in post than the difference between GC equations and sRGB equations. or actually, you can pretty much ignore it and just use the same principles for making images that work in general on the web. just like sRGB printers and other digital devices, in my experience it's best to work to the specification, at least for the raw render.
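As a side note, the "about 11 stops" figure quoted earlier in the thread can be sanity-checked from the sRGB curve alone:

```python
import math

# Dynamic range of 8-bit sRGB: the ratio between the brightest linear value
# and the smallest nonzero code value, linearized, expressed in stops.
def srgb_to_linear(c):
    return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4

darkest = srgb_to_linear(1 / 255)   # smallest nonzero code value, linearized
stops = math.log2(srgb_to_linear(1.0) / darkest)
print(stops)                        # ~11.7 stops, in line with the quoted figure
```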
Kobaltkween, thank you so much for correcting me. I see that I will need to do a lot more research into the fascinating topic of gamma correction on the web. If you happen to have more favorite web links on this topic, please post them.
Quote - Hum
http://http.developer.nvidia.com/GPUGems3/gpugems3_ch24.html
"For example, by convention, all JPEG files are precorrected for a gamma of 2.2." !!!!
:cursing:
In PP, checking gamma correction = gamma-correcting an already gamma-precorrected texture = gamma error. Solution: don't save in JPEG format; use a non-destructive format without gamma precorrection, e.g. PNG, TIF, HDR, and others.
Or use a gamma inverse node in the material room. Sorry for my bad English.
PS: JPEG = an unsuitable format for 3D applications.
Kalrua, thanks muchly for the link. Gamma correction is explained very clearly in this article. I also love the witty title "The Importance of Being Linear". Oscar Wilde has always been my favorite playwright. I simply cannot resist laughing each time I read or watch his famous play "The Importance of Being Earnest".
Quote - This is my first venture with GC and VSS but don't know if i have used them properly - I shall rely on Robyns guidance. So far though it seems a case of horses for courses. I am best pleased with No.6 because it played down a flaw in the face map - right nostril not properly aligned. However, the lighting effect has been reduced.
Apple_UK, Sorry I cannot see any details because the images look really small, almost like thumbnails, on my monitor. Would it be too much trouble for you to re-render them a little larger, maybe 700 pixels in height?
Quote - > Quote - We now return you to your regularly scheduled broadcast.
Ah, the famous Apple Computer commercial back in 1984 !!! A very futuristic commercial that sends the viewers a very powerful message.
I'm sorry to say, hawarren, that the quote comes from the closing credits of the old Outer Limits TV show.
Oops. I meant the image that came with the text posted by Lmckenzie on page 7 (shown above). Unfortunately when I clicked "quote", only the text was included. This image was a snapshot of the Apple Computer commercial back in 1984. You are correct that the quote came from the closing credits of the old Outer Limits TV show (one of my favorites). Lmckenzie combined the Apple Computer commercial with the Outer Limits quote. This is really COOL !!!
I must confess that my reference was to the (old?) standard network line when returning from some special bulletin or other anomalous event. I only remember the beginning of TOL with 'we will control the vertical..' etc. What the heck, I'll take credit for a third pop culture reference anyway :-)
"Democracy is a pathetic belief in the collective wisdom of individual ignorance." - H. L. Mencken
Oh dear.
This thread has become so full of confusion, misinformation, side issues and misinterpretations that the point of the original question - "What's the big deal with gamma correction?" - has been lost.
This article - not about a 3D app - may help you find the answer. Follow the links in the article, there's some fascinating stuff in there.
Here's a quote from the end of the article:
"On one hand, we could implement a fully linear workflow that would.... deal with bitmap data in a more physically correct manner and deliver better results... On the other hand, there is a deeply ingrained, decades-old workflow that almost everyone in the industry is used to..."
Sounds familiar!
Folks, awareness of linearized workflows is fairly new. They are only just being enabled in the 'Big' 3D apps. It's all a bit new and takes getting used to. You don't have to adopt such a workflow. It's optional. But it is, in my opinion, better, and well worth negotiating the learning curve that goes with it.
Windows 10 x64 Pro - Intel Xeon E5450 @ 3.00GHz (x2)
PoserPro 11 - Units: Metres
Adobe CC 2017