basicwiz opened this issue on Apr 07, 2009 · 88 posts
JoEtzold posted Thu, 09 April 2009 at 2:55 PM
Not to be nitpicking or complaining, but the problem with understanding this lies in the definition of "pixel". People often use the term in a rather fuzzy way.
Technically a PIXEL is a dimensionless, indivisible thing. It only makes sense together with a length dimension, especially as a DPI measurement. DPI means "dots per inch", where dots are used in the same sense as pixels. So 300 DPI gives you 300 pixels on one inch, and 2400 DPI gives you 2400 pixels on that same inch, so much more densely packed.
To describe an image map it's useless to do it the way it's often done (look at all the marketplaces): "with lots of 4000 * 4000 pixel maps". That says nothing about the map quality, because you don't know how big the map really is. I assume the creator means an image with 4000 DPI in both the x and y direction. Then it's a hi-res map.
But 4000 * 4000 pixels could also be spread over a map with a real size of 10 * 10 inches, and then it's only 400 DPI in each direction.
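To make that concrete, here is a little Python sketch (just the numbers from above, nothing more):

```python
# Same pixel count, different physical size -> different pixel density.
def dpi(pixels, inches):
    """Pixels divided by the physical length they are spread over."""
    return pixels / inches

# A "4000 x 4000 pixel" map at 1 x 1 inch vs. at 10 x 10 inch:
print(dpi(4000, 1))    # 4000.0 -> genuinely hi-res
print(dpi(4000, 10))   # 400.0  -> same pixels, ten times less dense
```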
So with that in mind, the shading rate has to be seen in relation to the DPI of the given shader input, i.e. a texture map.
Looking at the document linked by Bagginsbill, there is the following definition:
Quote - The Shading Rate tells the renderer the frequency at which the shaders should be run on the geometry in your scene. Confusingly, it is expressed as an area in pixels, not as a frequency per pixel, as one might be trapped to guess from its name.
Quote - A Shading Rate of 1 means that no micropolygon's size will exceed the area of one pixel. Imagine how even the simplest objects will generate thousands of micropolygons and render beautifully and without any nasty polygonal silhouette edges. Tip: A good setting for Shading Rate is 1 if you do high-res work.
* meaning hires INPUT !!!
Quote - Always keep in mind that Shading Rate determines area respective size of a micropolygon, not its edge length! At a Shading Rate of 9, micropolygons should end up with a maximum edge length of around 3 pixels on screen, namely the square root of 9.
If I interpret that right, a shading rate of 1 will stay at one pixel. And because a pixel can't be divided, we come to a 1:1 relationship with the DPI rate of the texture.
And with a shading rate of 0.1 we end up at 1/10 of a pixel of area per micropolygon, but the dimensionless pixel cannot be divided, so the only way is to increase the DPI rate of the rendering. Keeping in mind that the rate is an area, the linear density goes up with the square root: instead of 100 DPI of shading detail with shading rate 1, we end up at roughly 316 DPI (100 * sqrt(10)) with a shading rate of 0.1.
And so indeed a lower shading rate will noticeably sharpen the look of the output ... provided the input texture can supply that detail, for sure.
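To check that interpretation with a quick calculation (just a sketch built from the quoted area/edge-length definition, not from Poser's actual internals):

```python
import math

def max_edge_length(shading_rate):
    """Shading rate is a micropolygon AREA in pixels; the max edge is its square root."""
    return math.sqrt(shading_rate)

for rate in (9.0, 1.0, 0.1):
    edge = max_edge_length(rate)
    # Linear shading detail relative to rate 1 (edge length of 1 pixel):
    print(f"rate {rate}: max edge ~{edge:.2f} px, ~{1 / edge:.1f}x linear detail vs. rate 1")
```

So going from rate 1 down to rate 0.1 buys roughly 3x more linear shading detail (the square root of 10), which fits the warning in the quote that the rate is an area, not an edge length.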
And the same goes for the pixel samples rate for antialiasing. Thinking in terms of 2D imaging, antialiasing will NOT increase the DPI count of the image. It smooths hard edges by averaging the colors and/or greyscale values of neighboring pixels. But this comes with a loss of sharpness, because a pixel can't be divided. So more antialiasing means, for example, smoothing between pixel 1 and pixel 5 and losing the information of pixels 2, 3, 4, because these are now needed as intermediate steps between 1 and 5.
Also, if the overall DPI of an image is higher, antialiasing will work better and it seems you lose less sharpness. You still lose the same 3 pixels of content as in the example above, but each of those pixels represents less real detail than it would at a lower DPI.
You can reproduce this simply with a scanner. Scan a simple straight line at 100 DPI and at 2400 DPI. You will find that in the first case the line is covered by 10 pixels and in the second by 240 pixels. So if you antialias the edge of the line over the given 5 pixels, you lose 2 pixels of the line (and 2 of the background) for that. In total (left and right edge) you lose 4 of the 10 pixels of line detail.
With the higher DPI you lose 4 of 240 pixels of detail.
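That arithmetic as a tiny sketch (assuming a line 0.1 inch wide and a fixed 5 pixel smoothing zone, as above):

```python
def line_detail_lost(line_width_inch, scan_dpi, aa_pixels=5):
    """How much of the line's width gets eaten by a fixed smoothing zone on both edges."""
    line_pixels = line_width_inch * scan_dpi
    lost = 2 * (aa_pixels // 2)           # about 2 pixels per edge, left and right
    return lost, line_pixels, lost / line_pixels

for scan_dpi in (100, 2400):
    lost, total, frac = line_detail_lost(0.1, scan_dpi)
    print(f"{scan_dpi} DPI: lose {lost} of {total:.0f} line pixels ({frac:.1%})")
    # 100 DPI: 4 of 10 pixels (40%), 2400 DPI: 4 of 240 pixels (under 2%)
```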
So with this it's pretty clear that a smaller shading rate will give a sharper image, and that a higher pixel samples value will also introduce less blur when smoothing the edges.
How this simple 2D reasoning maps onto the corresponding micropolygons (mp) isn't my part of the theory. But I understand it in the sense that with a shading rate at or below 1 the mp's can cover the texture pixels roughly 1:1. And with a shading rate above 1, depending on the quality of the texture map, it can quickly happen that 1 pixel of the map has to cover more than 1 mp. That means smearing, and the output will look rather unsharp.
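As a last, purely illustrative sketch of that idea (the function, the numbers and the whole comparison are my own simplification, not how the renderer really decides anything): it just compares how coarse the micropolygons are on screen with how coarse the texture pixels are, and whichever is coarser limits the visible detail.

```python
import math

def detail_limit(shading_rate, texture_px, surface_screen_px):
    """Which is coarser on screen: the micropolygons or the texture pixels?"""
    mp_edge_px = math.sqrt(shading_rate)           # micropolygon edge in screen pixels
    texel_px = surface_screen_px / texture_px      # one texture pixel in screen pixels
    limiter = "shading rate" if mp_edge_px > texel_px else "texture map"
    return mp_edge_px, texel_px, limiter

# A surface covering 1000 screen pixels across, textured with a 500 pixel wide map:
for rate in (0.2, 1.0, 9.0):
    mp, tx, lim = detail_limit(rate, 500, 1000)
    print(f"rate {rate}: micropoly edge ~{mp:.2f} px, texel ~{tx:.2f} px -> limited by the {lim}")
```

In that toy setup, lowering the shading rate only helps until the texture map itself becomes the coarser element, which is exactly the "if it could be given by the input texture" caveat above.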
Just my 2 cents ...