Forum: Poser - OFFICIAL


Subject: Sweaty Skin: Alternate Specular - Glossy without Bump?

Ricky_Java opened this issue on Feb 21, 2008 · 18 posts


bagginsbill posted Tue, 26 February 2008 at 9:16 AM

RJ:

Each numeric parameter on a node (or the PoserSurface) is based on some underlying unit. These are not documented well (if at all) so it took me quite a bit of experimentation to discover what they are.

This is a pretty important topic that I've never written about. It's worth understanding.

First we need to talk about the three coordinate systems we can work with in shaders, and the direction (such as surface normals) and distance calculations that go with them.

In shaders, we can directly access some of these coordinates and related measures.

UV space - Variables/u (aka u_Texture_Coordinate) and Variables/v (aka v_Texture_Coordinate) - the U and V values.

World space - Variables/P - the XYZ coordinates of the point being shaded.

Combinations - dPdu and dPdv - the rate of change of the XYZ coordinates with respect to U or V. (As implemented by Poser for polygonal meshes, these are not particularly useful; they aren't continuous and smooth like you'd want them to be.)

It would be great if we had a few more. For example, I'd love to have the exact direction vector to the camera and to specific lights. We can get at some of this information indirectly, through trickery. For example, the Edge_Blend node gives us an indication of the alignment between N and the direction to the camera. And the Specular node gives some indication of whether N points at a light or not. These are not completely or exactly what you want or need at times, but they are very useful nonetheless. For example, if you look at the shader I posted above, you'll see I used a Specular node to drive a Blender. The Blender is choosing two different tinted versions of the skin texture. When the skin points directly at a light, it uses the blue tint. When pointed away from a light, it uses the red tint. This is my favorite hack for giving human skin an appearance that approximates subsurface scattering.
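The Specular-into-Blender trick can be sketched in a few lines. This is a toy model, not Poser's actual math: the simple facing term below stands in for whatever the Specular node outputs, and the tint values are illustrative.

```python
# Sketch of the Specular -> Blender fake-SSS trick described above.
# The facing term and tint colors are assumptions for illustration.

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def normalize(v):
    n = sum(x * x for x in v) ** 0.5
    return tuple(x / n for x in v)

def tint(color, t):
    return tuple(c * k for c, k in zip(color, t))

def fake_sss(skin_color, normal, light_dir):
    # "facing" stands in for any node whose output rises as N points at the light.
    facing = max(0.0, dot(normalize(normal), normalize(light_dir)))
    blue = tint(skin_color, (0.9, 0.95, 1.1))   # lit side: cooler tint
    red  = tint(skin_color, (1.1, 0.9, 0.85))   # shadow side: warmer, blood-colored tint
    # Blender: facing=1 -> blue-tinted skin, facing=0 -> red-tinted skin
    return tuple(r + (b - r) * facing for r, b in zip(red, blue))
```

A surface pointing straight at the light gets the full blue tint; one pointing away gets the full red tint, with smooth blending in between.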

So what is the point of all this knowledge? Well, first of all, it will help you know what you're actually specifying with certain parameters. For example, the Bump or Displacement amount is actually a World space distance, and is expressed in units matching your current choice of Display Units. I always use inches, as I find it convenient to think in those terms. Another World space distance is found on Reflect and Refract - the RayBias. I won't get into why Poser needs this silly parameter, as it is a painful subject to me and I get quite cranky about it. Suffice it to say that if you change your Display Units, you'll see this number change! (At least in Poser 7 it does.) Through experimentation, I am fairly certain that such World space distances are always written into the shader file in inches, regardless of how you entered them. This is important information for any software that writes shaders directly into files, such as my matmatic program.
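For software that writes these distances into shader files, the conversion to inches looks something like the sketch below. The unit factors assume the commonly cited figure of 1 Poser Native Unit = 8.6 feet; treat that number as an assumption, not something verified here.

```python
# Unit conversions for World space distances, assuming 1 PNU = 8.6 feet.
INCHES_PER_PNU = 8.6 * 12            # 103.2 inches
METERS_PER_PNU = INCHES_PER_PNU * 0.0254   # ~2.62 m

def display_to_inches(value, display_unit):
    """Convert a distance entered in the current Display Unit to inches,
    the unit such distances appear to be stored in within shader files."""
    factors = {
        "inches": 1.0,
        "feet": 12.0,
        "meters": 1.0 / 0.0254,
        "pnu": INCHES_PER_PNU,
    }
    return value * factors[display_unit]
```

So a RayBias entered as 1 foot would land in the file as 12 (inches), and 1 PNU as 103.2.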

Did you notice that none of the "Variable" nodes give us Model space xyz values? This is very unfortunate, as I've needed this information at times. But it would be wrong to think they're not used. They are used all over the place!!!

All of the nodes listed in "3D Textures" use xyz coordinates. When you set "x scale", for example, you are defining a divider to be used with the "x" coordinate. A bigger "x scale" causes the 3d texture to spread out in the x direction, by making the rate of change of x be lower. Similarly, "y scale" and "z scale" are also dividers for the corresponding Model space coordinates. By altering these scales, you alter the evolution of the 3d texture pattern in Model space.
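The divider behavior can be modeled with any periodic function. This toy stand-in assumes only what's described above: each scale parameter divides its Model space coordinate before the pattern sees it.

```python
import math

# Toy stand-in for a 3D texture node: each "scale" parameter divides its
# Model space coordinate, exactly as described above.
def toy_3d_texture(x, y, z, x_scale=1.0, y_scale=1.0, z_scale=1.0):
    # Bigger x_scale -> x / x_scale changes more slowly -> the pattern
    # stretches out along x.
    return math.sin(x / x_scale) + math.sin(y / y_scale) + math.sin(z / z_scale)
```

With x_scale doubled, you must travel twice as far along x to see the same pattern value: `toy_3d_texture(2.0, 0.5, 0.5, x_scale=2.0)` equals `toy_3d_texture(1.0, 0.5, 0.5)`.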

Now it would be logical and consistent if Model space scale parameters were expressed using your preferred Display Unit, in the same way that RayBias, Displacement, and Bump change to meet your desire for mental-math convenience. Well, they don't! Poser never shows these values in anything but their natural underlying Poser Native Units (PNU). That is why, RJ, you observed that the Turbulence node parameters were unaffected by changing Display Units.

You also observed that the Glossy parameters didn't change. But Glossy doesn't have any parameters relating to coordinates directly. It has Roughness and Sharpness, of course, but those have to do with angles and not coordinates or distances. If we had a preference setting for how angles are entered and displayed, for example radians versus degrees, then they would have to change, but we don't.

Now there are some nodes that have an interesting checkbox, called "Global_coordinates". For example, Cellular and Spots have this checkbox. Can you guess what these do? Stop reading and take a guess...

Yep - they change the node so that instead of using Model space xyz coordinates to drive the pattern, they use World space XYZ coordinates. Of what use is this? Consider what happens when you scale an object, such as the ground plane. The xyz coordinates spread out, right? So that means if you use a pattern on the ground, and you scale it up, you also scale up the pattern. This is generally not desirable for the ground. Suppose you're using a node to simulate waves on water, applied to bump or displacement. If you need more water, you want the prop to be bigger, but the waves should stay the same size. So you'd want to use World space coordinates, so the pattern's scale, with respect to the world, stays the same, even if the ground covers more of the world.
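The difference reduces to one division. The sketch below assumes the usual convention that scaling an object multiplies its model coordinates by the scale factor to produce world coordinates; the wave function is just a stand-in pattern.

```python
# Model space vs Global_coordinates (World space), sketched.

def wave(x):
    return x % 1.0   # stand-in for any coordinate-driven pattern, e.g. water waves

def shade_model_space(world_x, object_scale):
    # Default behavior: the pattern reads Model space x, so scaling the
    # object up stretches the waves out in the world.
    model_x = world_x / object_scale
    return wave(model_x)

def shade_world_space(world_x, object_scale):
    # Global_coordinates checked: the pattern reads World space X, so the
    # waves stay the same size no matter how big the object gets.
    return wave(world_x)
```

Double the object's scale and `shade_model_space` samples the pattern half as fast (bigger waves), while `shade_world_space` returns exactly what it did before.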

But my favorite nodes, Fractal_Sum and Turbulence, do not have this checkbox. What to do!!?!?

Well I discovered another bit of magic, an undocumented behavior that at first was very peculiar and difficult to comprehend, but I now use it a lot. If you plug any kind of node into "x scale", "y scale", or "z scale", those parameters STOP USING Model space x,y, or z values. Instead, they use the node you plugged in!!!

So, for example, I can plug a P node into any of those and use World space X, Y, or Z as I see fit. You have to either use one P node and three Comp nodes (to extract each coordinate) or use three P nodes, each set up to only pull out one coordinate. For example, P(x=1, y=0, z=0) will extract only the x coordinate. Unfortunately, due to how vector to scalar math works in the nodes, this gets divided by 3. So you should use P(x=3, y=0, z=0) instead, which cancels the divide by three.
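The divide-by-three is easy to see in a toy model of the P node. This assumes only the behavior described above: the x/y/z fields multiply the coordinates, and a vector feeding a scalar input is averaged over its three components.

```python
# Toy P node feeding a scalar input: component multipliers, then averaging.
def p_node_as_scalar(px, py, pz, x=1.0, y=1.0, z=1.0):
    return (px * x + py * y + pz * z) / 3.0
```

So for a point at (6, 5, 4), `p_node_as_scalar(6, 5, 4, x=1, y=0, z=0)` gives 2.0 (the x coordinate divided by 3), while setting `x=3` gives back 6.0 exactly, cancelling the divide.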

You can also force one of the 3D texture nodes to be 2D instead! Yes you can. Plug U into "x scale", V into "y scale" and you have UV instead of xy. However, the z is still tracking Model space. To stop it, plug a Math:Add node in with a value of 0. This makes the z value a constant - thus the pattern ignores the Model space z coordinate.
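The 2D trick follows directly from the plug-in behavior above: whatever you plug into a scale input replaces that Model space coordinate. A sketch, with an arbitrary stand-in pattern:

```python
import math

def texture_3d(cx, cy, cz):
    return math.sin(cx) * math.cos(cy) + 0.1 * cz   # any 3D pattern

def shade(u, v, model_z):
    # U plugged into "x scale", V plugged into "y scale": the pattern tracks UV.
    # Math:Add with value 0 plugged into "z scale": z is now a constant,
    # so the Model space z coordinate is ignored entirely.
    return texture_3d(u, v, 0.0)
```

The pattern now depends only on (u, v); moving the shaded point through Model space z changes nothing.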

With still more trickery, it is possible to design a set of nodes that causes the 3D Texture nodes to produce repeating tiles. This is a very esoteric thing, and is only needed if you're using Poser to generate seamless tiles that can be used as image-mapped textures for other purposes. Within Poser itself, it is generally better to take advantage of the infinite-non-repeating nature of the 3D textures.

I hope you've found this interesting. If something is unclear or you can't quite imagine how to produce a specific effect using this information, come back and ask. I'll post example renders and shader setups.


Renderosity forum reply notifications are wonky. If I read a follow-up in a thread, but I don't myself reply, then notifications no longer happen AT ALL on that thread. So if I seem to be ignoring a question, that's why. (Updated September 23, 2019)