Forum: Poser - OFFICIAL


Subject: Early SAMS3D Models in Poser 5

Mitch1 opened this issue on Nov 09, 2003 · 10 posts


ronstuff posted Mon, 10 November 2003 at 9:27 PM

Mac,
This is one of those 3D modeling concepts that is difficult to explain clearly, but simple to understand once you know how programs create 3D meshes. Anyway, I'll try ;-)

All surfaces start with a single polygon, and the simplest polygon that defines a surface is a triangle. So imagine that you have a sheet of paper and you draw 3 dots (points) anywhere on that paper - you now have the beginning of a polygon with 3 Vertices.

But to actually draw the polygon surface, you must connect the dots with lines and fill in (shade) the enclosed area.

Now imagine that you are connecting those dots on the paper. No matter which of the three points you start with, there are only two ways you can connect the remaining dots without lifting your pencil from the paper: in a clockwise direction or a counter-clockwise direction. In fact, if you numbered the dots dot1, dot2, dot3 as you placed them, you would see that a rotation order was already defined the moment the dots went down.
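If it helps to see that as arithmetic, here is a tiny Python sketch of my own (nothing Poser-specific, just an illustration) that tells you which way three dots on the paper wind - the sign of a simple cross-product test is all it takes:

# Which way do three dots (2D points) wind? The sign of the cross product
# of the two edges leaving the first dot gives the answer.
def winding_direction(p1, p2, p3):
    cross = (p2[0] - p1[0]) * (p3[1] - p1[1]) - (p2[1] - p1[1]) * (p3[0] - p1[0])
    if cross > 0:
        return "counter-clockwise"
    if cross < 0:
        return "clockwise"
    return "degenerate (all three dots on one line)"

print(winding_direction((0, 0), (1, 0), (0, 1)))   # counter-clockwise
print(winding_direction((0, 0), (0, 1), (1, 0)))   # clockwise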

Now, to a computer, a 3D mesh is just a collection of points in 3D space, defined within the mesh as one point after another in a long string (the last point in polygon #1 becomes the first point in polygon #2, etc.), and here too there are only two rotation paths possible to wind your way through the entire mesh: in a clockwise or counter-clockwise spiraling manner. This is known as Winding Order, and it is defined at the time each polygon is created in a modeling program.
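To make the "collection of points plus a visiting order" idea concrete, here is a minimal Python sketch (my own illustration, not how Poser stores things internally) of a mesh: one shared list of points, and faces that are nothing but ordered lists of indices into it - that ordering IS the winding order:

# A mesh as data: shared 3D points, plus faces that are ordered index lists.
# The order of indices within each face is the winding order - there is no
# other place in this data where it is recorded.
vertices = [
    (0.0, 0.0, 0.0),   # point 1
    (1.0, 0.0, 0.0),   # point 2
    (1.0, 1.0, 0.0),   # point 3
    (0.0, 1.0, 0.0),   # point 4
]

# Two triangles forming a square, both wound counter-clockwise as seen
# from the +Z side (the intended "outside").
faces = [
    (1, 2, 3),
    (1, 3, 4),
]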

The actual SURFACES do not exist in the mesh itself, but are filled in (shaded) only at render time. Now, any given polygon has 2 faces that might be shaded: the front face (sometimes called the normal face, and generally meant to be the one facing the OUTSIDE of an object) and the back face (the one facing the INSIDE of the object).

So, it is important for the computer to know which of the two faces to render, because it would be a huge waste of resources and time to make the computer do calculations for BOTH surfaces of each polygon when 99% of those back faces cannot be seen anyway - they are on the INSIDE of a closed object. So, by convention (and for some technical reasons too), only ONE of the two possible polygon surfaces is rendered --- but which one?

Early on in the history of 3DCG modeling, a single convention for determining which of the two faces to render was adopted by MOST (but not all) modeling programs. That convention is called the counter-clockwise rule. In other words, if you place 3 points on a piece of paper (or in 3D space) and connect them in a counter-clockwise pattern, then the face that is facing YOU is called the normal face and is the one to be shaded. If you connect them in a clockwise pattern, then you are looking at the BACK face of the polygon, which is not always shaded by the renderer.

A simple way of visualizing winding order and face normals is the Right Hand Rule: hold your right hand in front of you, with fingers curled and thumb sticking up. If the fingers point in the direction of the winding order (the point-drawing order), then the thumb points in the direction the normal face will be facing. This is called the normal (note the small "n") of the face.
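For anyone who prefers numbers to fingers, the Right Hand Rule is just a cross product. A short Python sketch (again purely my own illustration) - two edge vectors taken in winding order, crossed, point exactly where your thumb does:

# The right-hand rule as arithmetic: take two edges of the triangle in winding
# order and cross them; the result is the direction of the face normal.
def face_normal(p1, p2, p3):
    ux, uy, uz = p2[0] - p1[0], p2[1] - p1[1], p2[2] - p1[2]   # edge p1 -> p2
    vx, vy, vz = p3[0] - p1[0], p3[1] - p1[1], p3[2] - p1[2]   # edge p1 -> p3
    return (uy * vz - uz * vy, uz * vx - ux * vz, ux * vy - uy * vx)

# Counter-clockwise winding (seen from +Z): the normal points at you, along +Z.
print(face_normal((0, 0, 0), (1, 0, 0), (0, 1, 0)))   # (0, 0, 1)
# The same three points in clockwise order: the normal flips to -Z.
print(face_normal((0, 0, 0), (0, 1, 0), (1, 0, 0)))   # (0, 0, -1)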

Now all of this is fairly easy to visualize and control when dealing with triangles and simple primitives, but for more complex shapes winding order is a real PITA to manage manually, because many deformations can change the winding order within a mesh even if it starts out in the right direction.

So, modeling programs found alternate methods to describe which surface should be rendered - not to replace the winding order method, but to make it easier to visualize, control and correct in the event of a conflict. This is ADDED information in the mesh, in the form of an extra direction vector for each polygon that describes which way the normal face points (regardless of winding order) - this is called the Normal of the polygon, or generally speaking, "Normals".
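If you open an OBJ file in a text editor, that added information is easy to spot: "vn" lines, plus a second index on each face corner pointing at them. A tiny hand-made fragment (just for illustration, not from any real product file) looks like this:

# "v" lines are the points, "vn" lines are the ADDED Normals, and each face
# corner lists vertex-index//normal-index.
v  0.0 0.0 0.0
v  1.0 0.0 0.0
v  0.0 1.0 0.0
vn 0.0 0.0 1.0
f  1//1 2//1 3//1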

The only problem with Normals (capital "N", meaning information ADDED to the mesh) is that there are several ways to create them, and not all programs agree on which is the standard way. You can make them based on the Vertices, or based on the Polygon, and there might be other ways. This is not a problem as long as you model and render in the same program - your "Normals" are managed for you and generally look proper when rendered in THAT program. The problem arises when you try to render that mesh in a program that uses a different method of describing Normals.

Poser is designed to import a wide variety of mesh types, but because of differences in Normals, rather than trying to figure them out, Poser just ignores them and creates its own normals based on the good-old-fashioned winding order. Note carefully here that Poser DOES use normals (re-calculated based on the winding order of the mesh) but DOES NOT use Normals (the extra embedded data in the mesh).
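In other words, an importer that behaves the way I'm describing effectively does something like this rough Python sketch (my guess at the logic, not actual Poser code): drop any embedded Normals on the floor and rebuild one per polygon purely from the winding order.

# Rebuild a normal for every face from its winding order alone. Any "vn" data
# that came with the file is simply never consulted.
def rebuild_normals(vertices, faces):
    normals = []
    for face in faces:
        # faces use 1-based indices, OBJ-style, hence the "i - 1"
        (x1, y1, z1), (x2, y2, z2), (x3, y3, z3) = (vertices[i - 1] for i in face[:3])
        ux, uy, uz = x2 - x1, y2 - y1, z2 - z1
        vx, vy, vz = x3 - x1, y3 - y1, z3 - z1
        normals.append((uy * vz - uz * vy, uz * vx - ux * vz, ux * vy - uy * vx))
    return normals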

A good modeling program (IMHO) is one which ensures that Normals and winding order always agree with the widely accepted standards. If you flip the Normals, the program should adjust the winding order correspondingly. Unfortunately, not all modeling programs do this. They are sloppy about winding order because they assume the "Normals" data will still ensure proper rendering, and as long as you keep your mesh in that program, they are right. But they arrogantly insist that if you want to render THEIR mesh in a different program, then THAT program must recognize THEIR methodology, even though it is sloppy.
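A well-behaved "flip normals" command, in this view, is nothing more than reversing each face's vertex order - a one-liner, sketched here in Python purely for illustration:

# Flipping a face the honest way: reverse its winding. A normal recomputed
# from the reversed order automatically points the other way - no extra
# embedded data required.
def flip_face(face):
    return tuple(reversed(face))

print(flip_face((1, 2, 3)))   # (3, 2, 1) - same triangle, opposite winding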

And THIS is why Poser (rightfully) ignores embedded Normals: they are sometimes unreliable, and processing them can slow render times significantly. Sticking with winding-order normals makes for faster and more reliable rendering, but it does place extra demands on the modeler.

So, to answer your question...
Poser will NEVER use the Normals data embedded in the mesh file, so you might as well NOT export it from UVMapper, because it just bloats the file. So THESE are not the cause of the problem illustrated here. The Winding Order, which Poser DOES use for normals information, is determined at the time the mesh is created, and THAT is what is being reversed when people refer to "reversed normals" in Poser terms.

Oddly enough, when you export from Poser, polygon-based Normals are restored to the mesh. But these may be DIFFERENT from the Normals present in the original mesh that was imported into Poser, because when Poser exports it creates Normals based on the Winding Order - so they are in agreement even if the ones in the original mesh were contradictory. This is a good way to "fix" Normals that contradict the winding order, but there is no way to alter the winding order itself in Poser.
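Here is that "fix" in miniature, as a self-contained Python sketch (my own approximation of the behavior, not Poser's actual exporter): the file claims one Normal, the winding says another, and the regenerated Normal written on export is the one the winding dictates.

# A triangle that winds counter-clockwise seen from +Z, but whose file claims
# a Normal pointing the other way (-Z) - the kind of contradiction the
# import/export round trip quietly repairs.
p1, p2, p3 = (0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (0.0, 1.0, 0.0)
embedded_normal = (0.0, 0.0, -1.0)   # what the original file said

# Regenerate the Normal from the winding (cross product of the two edges):
u = (p2[0] - p1[0], p2[1] - p1[1], p2[2] - p1[2])
v = (p3[0] - p1[0], p3[1] - p1[1], p3[2] - p1[2])
exported_normal = (u[1] * v[2] - u[2] * v[1],
                   u[2] * v[0] - u[0] * v[2],
                   u[0] * v[1] - u[1] * v[0])
print(exported_normal)   # (0.0, 0.0, 1.0) - now agrees with the winding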

Sorry for the long-winded explanation - some day I might get around to making nice graphics to illustrate it, but till then, I hope this makes sense. ;-)