Forum: Poser - OFFICIAL


Subject: Interpretation of W coordinate in Daz's M2 & V2 UV's?

kuroyume0161 opened this issue on Jul 07, 2002 · 28 posts


kuroyume0161 posted Tue, 09 July 2002 at 8:29 AM

ronstuff, I do understand the difference; I'm proficient enough at mathematics, especially as it relates to geometry and 2D/3D space (I did write the 3D engine being used in my application). Let me explain where the UV comes from (you may already know this, so excuse my hubris).

In Cartesian space, a point is denoted by three real values, its coordinates, measured along axes that are usually (but not always) mutually perpendicular. These axes are denoted X, Y, Z, and a point is located by the triple (x,y,z). In vector space, there are no 'points'; there are vectors. These vectors need not be mutually perpendicular. The cardinal direction vectors of the space are denoted U, V, W (where have we seen those before?) and by default meet at the origin and are normalized to a magnitude of 1.0. A place in vector space is given by a vector originating where the defining vectors intersect, which involves a magnitude and a direction. Placing a vector that is not attached directly to the origin requires a displacement vector from the origin to reference it (like an offset).

With that said, I realize that the use of UVW vectors for texture mapping is different from XYZ coordinates, but the theory and implementation remain the same. Whether you are defining XY/XZ/YZ points or UV vectors, both describe the same thing: a plane. Not a coordinate system, but a geometric construct onto which any number of coordinate systems could be mapped. This is analytic geometry 101. The W is not needed any further in this discussion, since its relevance has already been discounted.

So, although the UV 'textures' described on the plane can be sorted out to reference different texture maps by using extra information (stored in the facet references), they all still lie on the same plane. There is absolutely no way, without an artifice like Poser and human interaction, to distinguish which UV vertices belong to one texture reference or another when they overlap; there is no other distinguishing information. As I said, when you consciously select a particular texture map image for V2's head, for instance, you have made the choice of which overlapped set becomes relevant. In essence, you say, "head vt's, meet head texture map (or any other map)."

In my situation, the overlapping head vt's and body vt's are in the same box, all jumbled together and indistinguishable (without some hefty work, and still flawed, as Ajax pointed out). I cannot select which set references which image, since my app is not referencing an image from a particular set of vt's but creating an image for all of the vt's (or at least all of those of interest to the user). The alternative that would work is to create a separate image for each occurrence of 'usemtl' or 'g' in the .obj file, but that would be costly and wasteful.

Load the standard Vicky2 into UVMapper. My application creates texture maps from what you see there. How do I create the two required images from that (which is how Daz has set this up)? What distinguishing feature tells me which set of facets must be sorted out to one map and not the other? In Poser, you do this consciously by selecting an image to map onto the desired material region.
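For what it's worth, here is a minimal sketch of what that 'usemtl'/'g' bookkeeping could look like while parsing the .obj (purely illustrative, not code from my app; the program name and variable names are made up). It buckets the vt indices referenced by each facet under whatever material or group statement is currently active, so each bucket becomes a candidate texture space that could be written out as its own image:

#include <cstdlib>
#include <fstream>
#include <iostream>
#include <map>
#include <set>
#include <sstream>
#include <string>

int main(int argc, char* argv[])
{
    if (argc < 2) { std::cerr << "usage: uvgroups file.obj" << std::endl; return 1; }

    std::ifstream obj(argv[1]);
    std::string line;
    std::string current = "(default)";

    // material/group name -> set of vt indices referenced by its facets
    std::map<std::string, std::set<long> > groups;

    while (std::getline(obj, line))
    {
        std::istringstream in(line);
        std::string tag;
        if (!(in >> tag)) continue;

        if (tag == "usemtl" || tag == "g")
        {
            in >> current;                    // a new region becomes active
        }
        else if (tag == "f")
        {
            std::string vert;
            while (in >> vert)                // each facet vertex is v/vt/vn
            {
                std::string::size_type a = vert.find('/');
                if (a == std::string::npos) continue;      // no vt reference
                std::string::size_type b = vert.find('/', a + 1);
                std::string vt = vert.substr(a + 1, b - a - 1);
                if (!vt.empty())
                    groups[current].insert(std::atol(vt.c_str()));
            }
        }
    }

    // Each bucket is one candidate texture space; Vicky2's overlapped head
    // and body vt's land in different buckets and could get separate images.
    std::map<std::string, std::set<long> >::const_iterator it;
    for (it = groups.begin(); it != groups.end(); ++it)
        std::cout << it->first << ": " << it->second.size() << " vt references" << std::endl;

    return 0;
}

Whether those per-bucket images are worth generating automatically, or only on the user's say-so, is exactly the cost question above; the point is just that the facet references do carry the needed distinction even though the vt's themselves don't.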
************

The use of the W is known to me, but I made this inquiry thinking that maybe Daz was using it nonstandardly to distinguish the different texture spaces in use by simple differentiation of the values, which would have greatly simplified the situation. Now I am left with probably one viable alternative that is less fallible, less costly, and removes the need for a not-so-extensible programming practice (hardcoding something that could change at any time), but it requires the user to have some foreknowledge and do some extra work. That is, notify the user that if this kind of situation exists (overlapping texture spaces), they must select the 'regions' that make up a single texture space and create the texture map image for each overlapping texture space separately. In other words, Vicky2 would require the user to select the head and create a texture map image, then select the body and create another texture map image, so that the two do not end up in the same image (and overlapped).

I hope that I haven't been too brash here. It's just that the minefield was planted when I incorrectly expected the W coordinate to be my savior by possibly having a significance that it normally doesn't have. Instead, I should have inquired about the basics of multiple texture map images for a single .obj file and avoided the consequences. ;) At the least, this has spotted a potential brick wall early in the development of the application and allows some time to consider viable alternatives.

Once again, thank you all for your exquisite input. Now, back to work!

Kuroyume

P.S.: I had the misfortune to be watching the news yesterday morning when the weatherperson started discussing sunburn alerts. The sweeping title: "UV Skin-dex". Now, if that doesn't make you wonder...

C makes it easy to shoot yourself in the foot. C++ makes it harder, but when you do, you blow your whole leg off.

 -- Bjarne Stroustrup

Contact Me | Kuroyume's DevelopmentZone