
Renderosity Forums / Poser - OFFICIAL



Welcome to the Poser - OFFICIAL Forum

Forum Coordinators: RedPhantom

Poser - OFFICIAL F.A.Q (Last Updated: 2024 Nov 18 10:25 pm)



Subject: Interpretation of W coordinate in Daz's M2 & V2 UV's?


kuroyume0161 ( ) posted Sun, 07 July 2002 at 7:48 AM · edited Wed, 20 November 2024 at 2:27 AM

Hey all, Does anyone have any idea how to interpret the W coordinate in Daz3d's Michael and Victoria .obj files? It is obvious that they are using the third UV (W) coordinate to separate the head and body texture space, but the values make no sense to me (0.011486 and 0.000000) for the most part. Why not just 1.000000 and 0.000000? How does Poser interpret this coordinate value? I'm going to look at the CR2 files for them and see if there are any clues there. Thanks for any information leading to the arrest of my ignorance in this. Kuroyume

C makes it easy to shoot yourself in the foot. C++ makes it harder, but when you do, you blow your whole leg off.

 -- Bjarne Stroustrup

Contact Me | Kuroyume's DevelopmentZone


lgrant ( ) posted Sun, 07 July 2002 at 10:28 AM

My blMilWom.obj file has all zeros for the third coordinate of the vt statements. My blMilMan.obj uses 0.013357 and -0.699724. The Alias/Wavefront specification for OBJ files defines the W coordinate only for 3D textures, and the texture maps we use are only 2D textures. I would wonder if the third coordinate is garbage introduced by some OBJ manipulation program. I have worked with at least one file converter that would stick garbage W coordinates in if the original file had only U and V. Perhaps DAZ at some point copied the body and head separately, then merged them. Just a thought.... Lynn Grant Castle Development Group
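Lynn's suspicion is easy to test directly: a short script can tally every distinct W value that actually appears on the vt lines of an OBJ file. This is a minimal sketch in Python (the helper name and example filename are just illustrations, not anything from Poser or DAZ):

```python
from collections import Counter

def count_w_values(obj_path):
    """Tally the distinct third (W) values found on vt lines in an OBJ file."""
    counts = Counter()
    with open(obj_path) as f:
        for line in f:
            parts = line.split()
            if parts and parts[0] == "vt":
                # vt lines may carry 2 or 3 coordinates; a missing W defaults to 0.0
                w = float(parts[3]) if len(parts) > 3 else 0.0
                counts[w] += 1
    return counts

# e.g. count_w_values("blMilWom.obj") would show whether the file really
# uses only 0.0, or carries leftover values like 0.011486
```

If the result is a couple of arbitrary-looking values (0.011486, -0.699724) rather than a deliberate 0/1 split, that supports the converter-garbage theory.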


darkphoenix ( ) posted Sun, 07 July 2002 at 12:13 PM

The W coordinate is not used in the M2 UV coordinates. All textures are mapped with a coordinate of 0 for W; the maps for their faces and others are mapped on the same coordinate, just two separate texture templates were made. That's why the head and body are separate materials. You can apply a map for each material of the M2 figures, you don't need just 2, and they will still all be on the same coordinate, because the material is only looking at the coordinates assigned to it, not what else is on that plane.


darkphoenix ( ) posted Sun, 07 July 2002 at 12:16 PM

Heh, sorry, posted too soon. Basically, what I was saying is you can have coordinates that intersect on a UV template and it won't matter, because the material will ignore anything outside of its own personal coordinates. So it doesn't matter if the head and body are overlapping on the W coordinate; when you tell the head to use a map it will ignore the rest. The M2 figures just use different maps so you can get higher resolution for the head on the second map; the body coordinates are still on the same map but the face ignores them.


kuroyume0161 ( ) posted Sun, 07 July 2002 at 2:31 PM

Okay, but let's say that someone loads the entire M2 geometry (which has only one set of vt's but is treated as two separate texture 'spaces' as it were) into my app and wants to create textures from a full-person photo. How do I determine A) that more than one texture map is needed to be output (as compared to a standard P4 figure or whatever other fiendish geometries are loaded) and B) which vt's represent which texture map of a plurality. Now, if both the head and body are mapped into a single space, but separated into two files, then there would need to be an ~offset~ to correct for this since each file usually represents a 0.0->1.0 U and 0.0->1.0 V plane. Correct? So, my texture output file may end up being a single image containing the head and body (as if you took the separate head/body texture images and adjoined them into a new image file)? UV coordinates and texture space make sense to me, but you're going to have to explain to me how this is applied over multiple texture files with a single texture space defined. If you think that you can show this graphically, that might help greatly. Kuroyume

C makes it easy to shoot yourself in the foot. C++ makes it harder, but when you do, you blow your whole leg off.

 -- Bjarne Stroustrup

Contact Me | Kuroyume's DevelopmentZone


gryffnn ( ) posted Sun, 07 July 2002 at 3:37 PM

Unless I'm misunderstanding your question, it seems you're overthinking this. You tell Poser which map to look for the texture for each part - like V2Head.jpg and V2Body.jpg (or GizmoLeft.jpg, GizmoRight.jpg and GizmoTop.jpg). If you switch the file names, parts of the body texture will be mapped to the head, and parts of the head texture mapped to the body. You'll end up with a patchwork because the images on each map haven't been drawn to fit the appropriate body part.


ronstuff ( ) posted Sun, 07 July 2002 at 4:27 PM

Maybe they are using the W coordinate as a unique identifier for each figure - possibly to eliminate "texture crosstalk"??? Just a guess ;-) If we edit the CR2 and create a new material, like SkinBody2, and give it a different W coordinate, do you think we could create a 2nd layer, like for body hair???


Nance ( ) posted Sun, 07 July 2002 at 4:42 PM

The obj file does not "know" that you will be using two maps. This is just a trick you pull on it so that you can re-use the same UV space by overlapping material assignments. The same concept as Tiling a surface -- different areas of a mesh are textured from the same area on the UV layout. This is all done to permit the use of multiple, smaller maps to achieve the same pixel resolution as much larger, resource-hogging, single layered maps. And Poser seems to prefer chewing on several small maps rather than one huge one.
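Nance's tiling analogy can be made concrete: a renderer that tiles a surface simply wraps UV values back into the 0-1 square, so different areas of a mesh end up sampling the same region of one texture. A minimal sketch of that wrap, using Python's modulo (an illustration of the concept, not Poser's code):

```python
def tile_uv(u, v):
    """Wrap UV coordinates into [0, 1) so the same texture area repeats.

    Python's % on floats always yields a non-negative result for a
    positive divisor, so negative coordinates wrap correctly too.
    """
    return u % 1.0, v % 1.0
```

Overlapping material assignments reuse UV space in the same spirit: two materials can occupy the same square, each pointed at its own smaller map.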


ronstuff ( ) posted Sun, 07 July 2002 at 5:14 PM

Thanks Nance! Maybe you can also answer this question which has always made me wonder: If you have a head texture that is 2000x2000, and you have a separate eye texture that is also 2000x2000 (but mostly all white space due to the positioning of the eye portion) and a third separate 2000x2000 texture for, say, the lips (again, mostly white space) - The question is, does Poser load data for 3 FULL maps (each 2000x2000) including all that white space, OR does Poser only load the area of the map that is actually used by the figure? I have always been hesitant to use all those external eye and lip textures because I want my renders to be as fast as possible, and assumed that loading all that white space into memory just slowed things down unnecessarily. It would be nice to know for sure that this is not the case (which you seem to imply above, if I understand you). Really curious...


Ajax ( ) posted Sun, 07 July 2002 at 7:21 PM

Nance is right. However, what Nance says does not in any way imply that the white space doesn't clog up Poser's memory. It does. In the example you give above, ronstuff, Poser does indeed load 3 full 2000x2000 maps. Think of it this way: Each material has its own UV map. Traditionally in the 3D world, each material would therefore get a separate texture (ie jpg or whatever file format the application uses). In Poser, it's common to use a little "trick" of UV mapping each material so that it occupies only part of the texture. In this way you can organise the texture in such a way that different areas of it correspond to different materials. This is useful if you would prefer to distribute only one texture with your model instead of one texture for each material in the model. However, it also sacrifices some of the versatility of the one-texture-per-material method, since it doesn't allow you to make a high res texture for part of the model without having to load a high res texture for everything else as well. (Hence your problem of having to load effectively three high res heads just to get your eyes and lips from different textures). In the case of the millennium figures, Zygote/DAZ decided to compromise between the "Poser approach" and the "traditional approach" by placing all of the materials on two textures. This way, you can load a high res head without having to load a high res body. However the ability to keep a separate high res set of eyes or separate high res lips was lost because they organised those on the same texture as the head.


View Ajax's Gallery - View Ajax's Freestuff - View Ajax's Store - Send Ajax a message


darkphoenix ( ) posted Sun, 07 July 2002 at 8:20 PM

Personally, I would remap the eyes and any other parts you want a high resolution for so that they take up the whole UV space, and then use a smaller map. A 700 x 700 texture map in which the eyes take up the whole map is gonna be better than a 3000 x 3000 map in which the eyes are still only taking up 500 x 500 pixels of space. I usually map my figures into 5 separate templates: one for the body, one for the face, one for all the other head textures, one for the eyes, and one for everything you want to make transparent (eyelashes, eyebrows, pubic hair, eyeball). That way you can have 5 1000 x 1000 texture maps that still have a higher resolution for most materials than the 3000 x 3000 and 5000 x 5000 hi res maps. Of course, a large amount of RAM will still help.


ronstuff ( ) posted Sun, 07 July 2002 at 9:53 PM

Thanks guys for the info about Poser - it turns out to be just what I expected, but nice to have it confirmed. The reason I asked, is that I develop for 3D games, and when it comes to real-time rendering - game engines seem far more advanced than Poser (and even some of our 3D tools). For example, textures are loaded once to read them into memory and map to each polygon, but what is cached is ONLY the portion of the texture actually applied to a polygon. This saves a lot of memory and processing, and allows for separate textures at different resolutions, or single textures for everything of one resolution. Also, this is the way most video cards with 3D hardware acceleration are designed - to cache only the actual applied texture, not the whole map. By the way, to the best of my knowledge, there is nothing explicit in UV mapping that dictates whether it supports one or many different source maps - it's just a way of mapping/linking image data to a polygon (or group of polygons) - it differs from what you call "standard" mapping because it is polygon based, compared to being object based. I wouldn't call it a "trick" at all, and certainly not a just the "Poser approach" - it is a more accurate form of mapping that has been brought about by improved technology, and is generally superior to clunky old object/material mapping.


kuroyume0161 ( ) posted Sun, 07 July 2002 at 10:24 PM

Alright, from what Nance and darkphoenix are saying, the W in these files is inconsequential (for all we know, anyway) and the UV's for the head and body still fill out an entire UV space, but that space is 'tiled' or partitioned into separate files for ease of use (by Poser in this instance). Is that the idea? If it is, then I still have to ask how my app can go about determining the partition. The application loads an image (ideally, a photo of a person) and a figure geometry (.obj file). After the user matches the two, it will produce a texture map of the covered parts of the image to fit the texture vertices. IOW, the image will be pseudo-unwrapped onto the UV plane. For simplicity, I could have the user split the texture map image into its respective images for Poser, but I'd rather that it produce the texture maps in as many files and correct partitions as Poser would expect for the geometry used. I guess the crux of this confusion is how Poser knows that a Daz body texture map only maps to the body and not to the entire figure incorrectly (since, by default, a texture map image covers the UV space, in at least one dimension). It couldn't possibly know the offset in UV space to begin the offset map-section without it being mentioned someplace. As you said, Nance, the obj file knows nothing about the UV space partitioning. In fact, the way the skin is divided in the materials as Skin Head and Skin Body reveals this. Putting this together with what Ajax has just said, it appears that the missing link resides within the material assignments. The question may be, "Where are they?" I believe this information would reside in the CR2 file; but they are huge files. Even Word on a dual 733MHz CPU and 768MB RAM is sucking wind with these files. Methinks it's going to require some deep study of the CR2 layout for precision in finding them. :( Kuroyume - a needle in a haystack ne'er looked as good a proposition.

C makes it easy to shoot yourself in the foot. C++ makes it harder, but when you do, you blow your whole leg off.

 -- Bjarne Stroustrup

Contact Me | Kuroyume's DevelopmentZone


Ajax ( ) posted Sun, 07 July 2002 at 10:29 PM

Ronstuff, Not sure what you mean by that last paragraph. Within the Poser context, each material can have only one texture (unless you count bump maps and transmaps). I'm not sure what you mean by a "standard mapping" or "object based". I've only been talking about UV mappings, not other types. It's just a question of whether you use an approach that makes each material occupy most (or all) of UV space (the traditional approach) or an approach that arranges all materials in non overlapping parts of UV space (an approach that I tend to think of as the "Poser approach" mostly because it's so vastly overused in the Poser community). There is no technological difference between the two. They use exactly the same technology. They are also both equally accurate. When you would use which approach really depends on what you're trying to do. If you're making a specific character for a movie or game, it makes sense to put everything on one texture because then it's easy to keep the character looking the same in every scene, no matter which animator is doing that scene. It's a convenience. If you're making a model that you want to look different every time (eg some environment models for games or just about anything for Poser) then it makes sense to put different parts on different textures so you can play mix'n'match, changing the texture of the columns and walls in a building while keeping the floor the same, or changing Vicki's eyes and lips etc. A lot of Poser models that were done with the "Poser approach" would have been a lot easier to customise if they'd been done with the "traditional approach". Changing the subject a little, Poser has in the past only allowed UV mapping. Judging by the pics kupa posted, it looks to me like Poser 5 will at last allow other mapping approaches, similar to the range of options available in Bryce perhaps. For architectural models, that could be very useful.


View Ajax's Gallery - View Ajax's Freestuff - View Ajax's Store - Send Ajax a message


darkphoenix ( ) posted Sun, 07 July 2002 at 10:37 PM

The CR2 only lists the materials for the OBJ file; the actual material mapping is in the OBJ itself. The CR2 is only Poser pulling the materials off so it can load them into the material editor and shader. As far as partitions, there is no such partition, just different maps being used. I'm still not clear on exactly what question you are asking. You have to assume we know little about the program you are referring to. Poser does not know that a texture maps to the body and not the entire figure; you have to tell it this when you assign the map to the material. As I stated, the materials are located in the object file when you UV map it. The materials being divided into skin head and skin body is just the coordinates being told, "HEY, this material is the skinhead," when the UV mapping is done. If you load the object into UVMapper, LithUnwrap, or any of several UV mapping utilities, you can see exactly what materials are assigned to what coordinates, and reassign them yourself. Am I helping any? Please, try to make it as simple as possible and explain exactly what it is you are looking for.
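The point that the material-to-coordinate assignment lives in the OBJ itself can be seen by walking the file: each usemtl line switches the current material, and the faces that follow it reference vt indices. A rough Python sketch of that walk (a hypothetical helper for illustration, not Poser's or UVMapper's actual code):

```python
def material_uv_indices(lines):
    """Map each usemtl material name to the set of vt indices its faces use.

    `lines` is any iterable of OBJ text lines (e.g. an open file object).
    """
    mats = {}
    current = "default"
    for line in lines:
        parts = line.split()
        if not parts:
            continue
        if parts[0] == "usemtl":
            current = parts[1]
        elif parts[0] == "f":
            for vert in parts[1:]:
                # face entries look like v, v/vt, v/vt/vn, or v//vn
                fields = vert.split("/")
                if len(fields) > 1 and fields[1]:
                    mats.setdefault(current, set()).add(int(fields[1]))
    return mats
```

Running something like `material_uv_indices(open("blMilWom.obj"))` would show exactly which texture coordinates each material (SkinHead, SkinBody, etc.) claims.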


darkphoenix ( ) posted Sun, 07 July 2002 at 10:39 PM

It looks like poser 5 will allow procedural texturing


Ajax ( ) posted Sun, 07 July 2002 at 11:06 PM

"I guess the crux of this confusion is how Poser knows that a Daz body texture map only maps to the body and not to the entire figure " Poser doesn't know anything about it. The user simply tells Poser which materials should draw on which textures. And the body texture maps to anything you want it to. It's just a picture. There is no partition of UV space; it's just that some materials have been mapped in such a way that they don't overlap others, so you can (but don't have to) tell them to use the same texture. There's nothing in the obj that would tell you how those materials are grouped. That bit only exists in the heads of human users. The best you could do is check which materials overlap each other by comparing their UV coords. It wouldn't be a complete solution though, because there are undoubtedly some materials on the body map that don't overlap with some materials on the head map. The program would think they belonged on the same map because they don't overlap, but it would be mistaken. The material assignments can be anything and will vary from one copy of a figure to another. Most (but not all) Vicki characters would follow the two map setup that DAZ designed Vicki to use. Some would have extra maps for nails, eyes, lips etc. A very few might only use one map if they're trying for some unusual look (like basket weave skin or brick skin or leaves or something). Some would have no maps at all. The material assignments are easy to find in the cr2. They're right at the end. You'll see that each material has a separate texture reference of its own. While you could set your program to check which materials are calling the same texture, unless you control which cr2 the user loads you don't really have any control over the conclusions your program comes to. My suggestion would be to hard code a Vicki setup that recognises the Vicki model and sorts the materials into the groups DAZ intended them to go in.
You would also need a Mike setup, a Steph setup etc if you chose to go that route. UV Mapper has the ability to assign materials to "regions" and store that info in the model (each region being a group of materials that belong on the same map - but again the user decides how to do that, not the program). I'm not sure if that's a standard part of obj file language. I do know that not every model makes use of it. Whether the Millennium models were set up with regions, I couldn't tell you. It would be worth checking out, because if they do have regions, that would be a complete solution to your problem.
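The overlap heuristic described above can be sketched with simple UV bounding boxes: if two materials' UV footprints overlap, they cannot share a texture map; as noted, the converse does not hold, so non-overlap proves nothing about belonging together. A minimal illustration in Python (bounding boxes are a coarse stand-in for true per-face overlap tests):

```python
def uv_bbox(uvs):
    """Axis-aligned bounding box (umin, vmin, umax, vmax) of a list of (u, v) pairs."""
    us = [u for u, v in uvs]
    vs = [v for u, v in uvs]
    return (min(us), min(vs), max(us), max(vs))

def bboxes_overlap(a, b):
    """True if two UV bounding boxes share any interior area."""
    au0, av0, au1, av1 = a
    bu0, bv0, bu1, bv1 = b
    return au0 < bu1 and bu0 < au1 and av0 < bv1 and bv0 < av1
```

Materials whose boxes collide (like a head and a body mapped over the same square) must go to different output images; the ambiguous non-colliding pairs are exactly where the heuristic falls short.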


View Ajax's Gallery - View Ajax's Freestuff - View Ajax's Store - Send Ajax a message


ronstuff ( ) posted Mon, 08 July 2002 at 8:23 PM

Ajax, a lot of people look up to you in this community, so you should be careful when lecturing others when you clearly don't understand the fundamental difference between UV mapping and all other kinds of mapping (and there are several OTHER kinds of mapping). Without at least understanding what UV(W) mapping really is, the question raised on this thread about the significance of the W coordinate cannot be addressed. I don't wish to be argumentative, and I sincerely want to know how Poser processes the UV information (because it clearly IS a bit different from other programs). The difference between Poser and other programs is NOT, however, what you suggest, and has nothing to do with UV mapping per se - that much I KNOW. Here is a simplified example of the real difference between UV mapping and other mapping: Let's say you have a simple 3D object like a cube that normally has 6 sides or faces (triangulated = 12 faces). With non-UV mapping we can apply a material (texture map or not) to any of those 6 faces and that material will automatically be stretched (or tiled - or "decaled") from diagonal corners of the face (ie: the only coordinates referenced in the mapping information are the 2 diagonal points in space that define the extent of the "surface") - sounds simple enough, and at this point would appear identical to UV mapping, which would do the same thing. But if we take one side of the cube and subdivide the polygons to get a higher density mesh, we end up with a rectangular plane that has a "grid" of vertices within it. Then we start moving those vertices in the middle of the grid around (morphing them - but leaving the outside square edges alone). Here is where we will see the difference between UV mapping and all other mapping. Let's imagine that you had a smiley face mapped to the side of the cube. With UV mapping applied, if you move the internal grid of points around, you could make the smile turn to a frown or raise an eyebrow.
Without UV mapping, however, the smiley face would keep smiling and staring straight at you, because it is only mapped corner-to-corner on the face of the cube. And THAT is essentially what UV mapping is: a method of mapping a texture to individual vertices rather than to an area defined by a material. To say that you wish Poser would use the "other" method of mapping is silly, because realistic human skin texturing just would not be possible without UV mapping - not to mention the fact that the texture would not "follow" a morphed figure. Now I agree that Poser IS different from some other apps that support UV mapping because it combines materials onto a single image map. But Poser is NOT alone in this - many 3D games do the same thing for a very good reason - it speeds up rendering and reduces processor load. Even so, there is nothing explicit about UV mapping that says you have to have one texture or many textures - it is just a choice made by the person mapping the original mesh. The downside in Poser is that changing just one element of that map, like a lip texture, becomes a pain in the butt, and requires either merging your favorite lips with your favorite skin OR wasting a lot of processor power loading a mostly blank texture with just lips on it OR remapping the lip material of the figure, thus creating a new obj file. You are right that in architectural modeling, non-UV mapping is preferred, because it is great to tile a small concrete texture over a large area and produce a realistic result. The human body, however, is nothing like a sidewalk or a stucco wall, because repetitive or tiled skin textures look unrealistic and flat. Ground textures and some clothing items could be a different story, and I sometimes wish it were easier to "tile" a small sample texture within Poser - but I would never trade UV mapping for that.
You're also right about it looking like Poser 5 will support material procedures, and shaders (and all that implies like tiling and fractal methods) but I guarantee that they don't do it at the expense of UV mapping. Now the real question is whether all this power will be part of the built-in rendering engine, or is it just a call to an external render app (like renderman) that we have to purchase separately? Even so, it looks like a MAJOR overhaul of Poser, and I'm twitching with antici........ ...pation.


kuroyume0161 ( ) posted Mon, 08 July 2002 at 8:54 PM

Sorry that I started a war :), but I took the liberty of loading Vicky baby into UVMapper and NOW it makes sense to me. This makes it even trickier: since the UV's for the head and body overlap in UV space without delineation, it will be extremely difficult to know to create separate images. Instead, my app, as it would basically work, would create a mish-mosh map (that's a new type of texture map) with the face and body overlapping in a single image. Not a good thing. So, the trick works going onto the 3D object, a la Poser, but it wreaks havoc going the other way (obviously no problem when just using the texture template image). Ajax, your tentative solution of checking material groups for overlap crossed my mind, but I agree that it is not a bullet-proof solution. The other possible solutions will be checked out. Never saw regions in .obj files. There are lots of fields not used by Poser and I haven't seen any of them in any of the third-party objects (curves, patches, surfaces, etc). Those that are there are v, vt, vn, f, g, s, usemtl. None of these will get me that crucial information. This may need some in-depth discussion with Daz to find a simple resolution. Thanks all of you for being patient and trying to explain how Daz's divided texture maps actually work. Sorry that it took so long to think about loading the geometry into UVMapper and seeing it for myself. Where's Poser 5 to the rescue? ;) Kuroyume

C makes it easy to shoot yourself in the foot. C++ makes it harder, but when you do, you blow your whole leg off.

 -- Bjarne Stroustrup

Contact Me | Kuroyume's DevelopmentZone


darkphoenix ( ) posted Mon, 08 July 2002 at 10:09 PM

The simple solution to your problem would be to remap Vicky so that none of her materials are overlapping. Since the millennium figures' current mapping is not too bad and remapping them without spreading the mesh faces would be a pain, I recommend simply keeping the current mapping coordinates, but moving them around in the UV space and scaling the materials so that they don't overlap. This would make the M2 texture template very similar to an original P4 template. It would also allow you to use a single texture map instead of two or more when you were texturing the figure. After remapping and rescaling the figure, save the new texture coordinates and design your app so that it applies the saved coordinates to the mesh instead of using the current ones. This will overwrite the current coordinates saved into the mesh with the new ones. It will only work if the mesh your user is using is the same mesh as the one you made the coordinates from (morphs can be applied, it just has to have the same number of vertices and faces). You can assign regions to UV coordinates, however this is not necessary to determine what material goes on which map, as the templates just tell you where the coordinates are and the user is the one that assigns the maps. These regions may be useful to other 3D painting programs when you are determining layers and creating generated maps from the program, but are not recognized by Poser whatsoever as far as I can tell. However, you might be able to use this to your advantage for your app, though I don't know jack about programming so I couldn't tell you. For that, someone who works for Right Hemisphere might be able to tell you, but I'm clueless.
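The overwrite step suggested above could look something like this sketch, which swaps the vt lines of an OBJ (held as a list of text lines) for a saved set of UV pairs, and refuses to proceed if the counts differ, the guard against applying coordinates made from a different mesh (hypothetical helper, for illustration only):

```python
def apply_saved_uvs(obj_lines, new_uvs):
    """Replace the vt lines of an OBJ (list of lines) with saved (u, v) pairs.

    Raises ValueError if the UV counts differ, since the saved coordinates
    only make sense for the exact mesh they were authored against.
    """
    vt_count = sum(1 for ln in obj_lines if ln.startswith("vt "))
    if vt_count != len(new_uvs):
        raise ValueError("UV count mismatch: not the same mesh")
    it = iter(new_uvs)
    out = []
    for ln in obj_lines:
        if ln.startswith("vt "):
            u, v = next(it)
            out.append(f"vt {u:.6f} {v:.6f}")
        else:
            out.append(ln)
    return out
```

Because morphs move only the v (vertex) lines and leave the vt list alone, morphed copies of the same mesh pass the count check, matching the constraint described above.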


Ajax ( ) posted Mon, 08 July 2002 at 10:17 PM

Ronstuff, I think we're talking at cross purposes here. I haven't said anything about differences between Poser and other programs. I have only been talking about Poser. One thing I did say is that Poser users tend to follow a certain philosophy of UV mapping more than users of other programs do. It's a difference in the users and their philosophies, not in the programs. Believe me, I do know and understand the differences between different types of mapping. However, apart from a couple of passing references, I haven't talked about different types of mapping. In my discussions in this thread I've only talked about the difference between two different philosophies of UV mapping. They're both UV mapping. They both use the same technology and are processed in exactly the same way. The difference between them is in the approach taken by the person creating the mapping, not in the code that processes it. I think you may have misinterpreted me to be saying something about UV mapping versus some other type (such as object space mapping, or world space mapping). I'm not. I'm ONLY talking about UV mapping. I'm just talking about different ways of using it. I think I also misinterpreted something you said earlier. That final paragraph in post 12 confused me because it clearly refers to something I had said previously but whereas I had been talking and thinking exclusively about UV mapping, on rereading I can see that in that paragraph you're talking about object space mapping. The reference to my earlier comments threw me off in my interpretation of what you were saying. I never said there was anything unique to Poser about these two approaches to UV mapping that I've been talking about. I just said that I tend to think of the "everything on one texture" approach as the "Poser approach" because people creating models for Poser often do it even when it would make more sense to tackle their UV mapping according to the other philosophy.
As you've pointed out, the all-on-one-texture approach makes a lot of sense for human figures. On the other hand it makes very little sense for architectural models. Yet, within the Poser community you very often see architectural models that use the all-on-one-texture approach even though they don't need to. The maker could just as easily have assigned a different texture to each material and made their UV mappings with that in mind. I also didn't say anything about wishing Poser didn't use UV mapping. I simply said that it looks like Poser 5 will introduce more options and I think that's a good thing. Sorry about the long post. I just hate misunderstandings. I'm hoping we can clear up this one.


View Ajax's Gallery - View Ajax's Freestuff - View Ajax's Store - Send Ajax a message


darkphoenix ( ) posted Mon, 08 July 2002 at 10:42 PM

Personally, I think the way most Poser maps are set up is pretty shoddy myself, but then you realize most Poser users don't know anything at all about UV, and also that for community purposes textures are supposed to be simple to share and distribute, and that most Poser users just want to apply a texture, not worrying about how to set one up. This method also makes it easier to apply any Poser made texture to any Poser made figure without having to texture every one. The only real problem is when you try to make hi res or photorealistic textures for Poser using the current mapping scheme. That's why I always remap my figures whenever I bring them into another program. Also, I personally think the way materials are assigned in Poser has a lot more to do with being able to change the color of the materials and with being able to mix and match different maps from different people than any actual functionality as far as the texturing goes. If you try to use the same methods in 3ds Max or Lightwave that Poser uses, your results are not usually as good as you would expect, which is what I believe Ajax was referring to when he was talking about the "Poser approach". Personally, I have to remap and create new textures for every Poser figure I export just because the way Poser does it simply is not as effective.


ronstuff ( ) posted Mon, 08 July 2002 at 11:10 PM

Kuroyume, I appreciate your dilemma, and certainly mean no disrespect. It is all a bit confusing. One thing that might help you is understanding the differences between the UV(W) and XYZ coordinate systems, and the reason why we use UV mapping instead of XY mapping. I bring this up because from the sound of what you are trying to do, you seem to think that UV is the same as XY, and it is not. Consequently, something which looks like "overlapping" textures when they are displayed in an XY fashion (as is done in UV Mapper), are not really "overlapping" in UV space at all. The problem is that we humans can't really see a "UV" surface the way a computer does, and we call the coordinates "U,V" instead of "X,Y" to remind us that there IS a difference. To understand this difference, imagine a single point in 3D space. The precise location of that point requires 3 coordinates to describe it completely. We use the X, Y, Z coordinate system for locating that single point. In any given universe (Poser, 3D MAX, Bryce etc) those coordinates consistently represent the length, width, depth values for that system. For example, if X is width and Y is height (as in Poser) then X ALWAYS = Width and Y ALWAYS = Height when viewed from the same perspective in that system. UV coordinates do NOT behave like that. UV coordinates SOMETIMES represent mapping in the XY plane and SOMETIMES mapping in the XZ plane and SOMETIMES mapping in the YZ plane (and even other types of mapping as well), but they are all still valid UV values because they DO correspond to the XY coordinates in our 2D texture map, and thus U ALWAYS = X and V ALWAYS = Y in 2D SPACE ONLY! but not in 3D space. So instead of thinking of UVW mapping like XYZ mapping which is described by 3 spatial coordinates - think of it as a different beast all together which is described by TWO coordinates and a DIRECTION. 
It still requires 3 pieces of information to place it in 3D space, but the information is not the same as is used to locate a vertex of a 3D object. So where does that THIRD piece of information in UV mapping come from? It is the "method" used to map any group of vertices (planar XY, planar YZ, spherical, cylindrical, etc.), and it can be stored with each vertex OR for a whole group of vertices in one reference, but don't confuse this with the "W" coordinate, as that is something else (a 4th dimension, if you will, and it is sometimes used to describe "layers"). Some programs store this 3rd piece of UV location information (not the W value) with the NORMALS and some store it with the vertices, but it is always there somewhere, or the map can't be applied with the UV coordinates alone. Good luck with your project. It sounds really interesting, and I hope there is something here that "sparks" your imagination to continue your efforts. Admittedly, this is a simplification and not necessarily a "technical" assessment, but more of an overview of the basics. I still don't know the exact details of the "Poser method" myself, but I understand the principles upon which it is built. Best wishes to all; I did not mean to step on any toes.
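A tiny sketch (not from the thread; the `planar_uv` name is made up for illustration) of the point above that a UV pair is really two coordinates plus a projection direction: the same 3D vertex yields different UV values depending on which plane it is projected onto.

```python
def planar_uv(vertex, plane):
    """Project a 3D point (x, y, z) onto a 2D UV pair using a
    simple planar mapping in the named plane."""
    x, y, z = vertex
    if plane == "XY":      # mapped as seen from the front
        return (x, y)
    elif plane == "XZ":    # mapped as seen from above
        return (x, z)
    elif plane == "YZ":    # mapped as seen from the side
        return (y, z)
    raise ValueError("unknown projection plane")

v = (0.25, 0.75, 0.5)
print(planar_uv(v, "XY"))  # (0.25, 0.75)
print(planar_uv(v, "XZ"))  # (0.25, 0.5)
```

Once the projection has been applied, only the resulting UV pair is stored in the file; the "direction" is baked in.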


ronstuff ( ) posted Mon, 08 July 2002 at 11:30 PM

darkphoenix, you make some valid points. Clearly, Poser 4 has inherited a lot of baggage from its predecessors, and texture mapping has come a long way since Poser 1 and 2, which mostly relied on material colors alone. When UV texture mapping became more practical, it was merely added to a system which was not necessarily organized to take full or best advantage of it. But UV mapping is definitely an ADVANCEMENT in technology, and when properly used it provides much more accurate placement of maps onto 3D objects than any other method. If precision placement is not an issue, then there are far more practical (not necessarily BETTER) methods of "texturing" an object for rendering. I also agree that a LOT of stuff on the market is not very well mapped, but please don't blame the UV system or technology for that; it is just a lack of talent on the part of modelers, who spend all their time making a mesh and then rush through the mapping process like it is a waste of their time. I think if they really understood the potential of the tool in their hands (and how it differs from other texturing methods) they would produce better results.


kuroyume0161 ( ) posted Tue, 09 July 2002 at 8:29 AM

ronstuff, I do understand the difference; I'm proficient enough at mathematics, especially as related to geometry and 2D/3D space (I did write the 3D engine being used in my application). Let me tell you where the UV comes from (you might already know this, so excuse my hubris). In Cartesian space, points are denoted by three real values, coordinates, located along the (usually) mutually perpendicular axes of the space, denoted X, Y, Z, so a point is located with the triple (x,y,z). In a vector space, there are no 'points'; there are vectors, and these vectors need not be mutually perpendicular. The basis vectors for the space are denoted U, V, W (where have we seen those before?) and by default are normalized to meet at the origin and have magnitude 1.0. A place in the vector space is given by a vector from the origin, which involves a magnitude and a direction; placing a vector not directly attached to the origin requires a displacement vector from the origin to reference it (like an offset). With that said, I realize that the use of the UVW vectors for texture mapping is different from XYZ coordinates, but the theory and implementation remain the same. Whether you are defining XY/XZ/YZ points or UV vectors, they both describe the same thing: a plane. Not a coordinate system, but a geometric construct which could have any number of coordinate systems mapped onto it. This is analytic geometry 101. The W is not necessary for this discussion any further, since its relevance has been discounted. So, although the UV 'textures' described on the plane can be sorted out to reference different texture maps by utilizing extra information (stored in the facet references), they still all lie on the same plane.
There is absolutely no way, without an artifice like Poser and human interaction, to distinguish which UV vertices represent one texture reference or another when they overlap; there is no distinguishing information otherwise. Like I said, when you consciously select a particular texture map image for V2's head, for instance, you have made the choice of which overlapped set becomes relevant. In essence, you say "head vt's, meet head texture map (or any other map)." In my situation, the overlapping head vt's and body vt's are in the same box, all jumbled together and indistinguishable (without some hefty work, and still flawed, as Ajax pointed out). I cannot select which set references which image, since my app is not referencing an image from a particular set of vt's but creating an image for all of the vt's (or all of those of interest to the user, at least). The alternative that would work would be to create a separate image for each occurrence of 'usemtl' or 'g' in the .obj file, but that would be costly and wasteful. Load the standard Vicky2 into UVMapper. My application creates texture maps from what you see. How do I create the two required images from that (which is how Daz has set this up)? What distinguishing features tell me which set of facets must be sorted out to one map and not another? In Poser, you do this consciously by selecting an image to map onto the desired material region. ************ The use of the W is known to me, but I made this inquiry thinking that maybe Daz was using it nonstandardly to distinguish the different texture spaces by simple differentiation of the values, which would have greatly simplified the situation. Now I am left with probably one viable alternative that is less fallible, less costly, and removes the need for a not-so-extensible programming practice (hardcoding something which could change at any time), but requires the user to have some foreknowledge and do some extra work.
That is, to notify the user that if this kind of situation exists (overlapping texture spaces), they must select the 'regions' that make up a single texture space and create a texture map image for each overlapping texture space. In other words, Vicky2 would require the user to select the head and create a texture map image, then select the body and create another texture map image, so that they do not end up in the same image (and overlapped). I hope that I haven't been too brash here. It's just that the minefield was planted when I incorrectly expected the W coordinate to be my savior by possibly having a significance that it normally doesn't have. Instead, I should have inquired about the basics of multiple texture map images for a single .obj file and avoided the consequences. ;) At the least, this has spotted a potential brick wall early in the development of the application and allows some time to consider viable alternatives. Once again, thank you all for your exquisite inputs. Now, back to work! Kuroyume P.S.: I had the misfortune of watching the news yesterday morning when the weatherperson started discussing sunburn alerts. The sweeping title: "UV Skin-dex". Now, if that doesn't make you wonder...
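For what it's worth, the per-'usemtl' split discussed above can be sketched roughly like this (the `uv_sets_by_material` helper name is hypothetical, assuming a plain-text Wavefront .obj): while parsing, remember which material is currently in effect and record which vt indices each material's faces reference, so overlapping UV sets can later be rendered to separate images.

```python
def uv_sets_by_material(path):
    """Collect all vt records from a .obj file and group the vt
    indices referenced by faces under whichever 'usemtl' is in
    effect when each face appears."""
    uvs, sets, current = [], {}, "default"
    with open(path) as f:
        for line in f:
            parts = line.split()
            if not parts:
                continue
            if parts[0] == "vt":
                # keep only U and V; a trailing W column is ignored
                uvs.append(tuple(float(c) for c in parts[1:3]))
            elif parts[0] == "usemtl":
                current = parts[1]
            elif parts[0] == "f":
                # face refs look like v, v/vt, or v/vt/vn
                for ref in parts[1:]:
                    fields = ref.split("/")
                    if len(fields) > 1 and fields[1]:
                        sets.setdefault(current, set()).add(int(fields[1]) - 1)
    return uvs, sets
```

The returned mapping answers exactly the question posed above: which facets (and thus which vt's) belong to which material, without hardcoding anything about a particular figure.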

C makes it easy to shoot yourself in the foot. C++ makes it harder, but when you do, you blow your whole leg off.

 -- Bjarne Stroustrup

Contact Me | Kuroyume's DevelopmentZone


Nance ( ) posted Tue, 09 July 2002 at 10:22 AM

sidenote: To work with the separate materials for each of the two maps, try assigning each material to one of two UVMapper "Regions". You can then display/hide these regions one at a time to avoid the confusing overlapped display.


Spanki ( ) posted Tue, 09 July 2002 at 1:28 PM

Let me see if I can clear up a few things (without muddying them up more ;)...

First off, the 'regions' that UVMapper creates... these are a completely arbitrary means of grouping vertices. A user may group any number of vertices from any section of the model into any number of regions... you can't rely on that grouping to 'mean' anything (i.e., they don't mean one region is one map and another is another). The region info is written to the .obj file as #comments (which, btw, every other app in the world will delete when it writes the file back out).

UVMapper regions are there simply as an operational/interface use within the UVMapper program to provide a new way (layer) to group vertices. As Nance says, one use of them is to identify materials using different maps.

Kuroyume, if your application loads an .obj file (which has all the usemtl records), does it also load the .mtl file? If so, you can just use it as a guide... each material has its texture map file listed there. Using that file as a guide keeps you from having to 'hard-code' any particular mapping... and even if it's missing, you could let the user select from several 'default' .mtl files to keep things data-driven.

I've spent quite a lot of time researching the .mtl lib file format and there are some inconsistencies, but here's what I've found:

Ns = Phong specular component. Ranges from 0 to 1000. (I've seen various statements about this range (see below))
Kd = Diffuse color weighted by the diffuse coefficient.
Ka = Ambient color weighted by the ambient coefficient.
Ks = Specular color weighted by the specular coefficient.
d = Dissolve factor (pseudo-transparency). Values are from 0-1. 0 is completely transparent, 1 is opaque.
Ni = Refraction index. Values range from 1 upwards. A value of 1 will cause no refraction. A higher value implies refraction.
illum = (0, 1, or 2) 0 to disable lighting, 1 for ambient & diffuse only (specular color set to black), 2 for full lighting (see below)
sharpness = ? (see below)
map_Kd = Diffuse color texture map.
map_Ks = Specular color texture map.
map_Ka = Ambient color texture map.
map_Bump = Bump texture map.
map_d = Opacity texture map.
refl = reflection type and filename (?)

...I've also seen values of 3 and 4 for 'illum'... when there's a 3, there's often a 'sharpness' attribute, but I didn't find any explanation. And I think the illum value of 4 is supposed to denote two-sided polygons, but I kinda get the impression that some people just make stuff up and add whatever they want to these files, so there could be anything in there ;). Of course, Poser only writes out a few of the above and apparently ignores many of them when reading as well.
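A minimal, hedged sketch of a parser for just the records listed above (the `parse_mtl` name and the exact value handling are illustrative, not a complete Wavefront implementation):

```python
def parse_mtl(path):
    """Parse the subset of .mtl records listed above into a dict of
    per-material dicts, keyed by the 'newmtl' name."""
    materials, current = {}, None
    with open(path) as f:
        for line in f:
            parts = line.split()
            if not parts or parts[0].startswith("#"):
                continue
            key = parts[0]
            if key == "newmtl":
                current = parts[1]
                materials[current] = {}
            elif current is not None:
                if key in ("Kd", "Ka", "Ks"):          # RGB colors
                    materials[current][key] = tuple(float(v) for v in parts[1:4])
                elif key in ("Ns", "Ni", "d", "sharpness"):  # scalars
                    materials[current][key] = float(parts[1])
                elif key == "illum":
                    materials[current][key] = int(parts[1])
                elif key.startswith("map_") or key == "refl":  # file refs
                    materials[current][key] = " ".join(parts[1:])
    return materials
```

Keeping the parser tolerant (skipping unknown records rather than failing) matches the observation above that all sorts of extra statements show up in .mtl files in the wild.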

Also, for the Ns (OpenGL shininess) value, OpenGL wants this range to be 0-128... one source said that the Wavefront spec was 0-1000 (and I've seen many .obj files with values in the 400-700 range), but the PolyTrans people say 0-200, and they've been refining their converter program for over a decade (shrug). Of course, Poser seems to use a range of 1-100, so I don't suppose you can ever get this value 'correct' for all apps. I haven't checked Bryce or other apps yet... since my app is mostly geared towards Poser, I decided to cap it at 100.
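The range juggling described above could be handled with a simple rescale-and-clamp (an assumed helper; pick whichever source and destination ranges your target apps actually use):

```python
def remap_ns(ns, src_max=1000.0, dst_max=100.0):
    """Linearly rescale an Ns shininess value from an assumed source
    range [0, src_max] to [0, dst_max], clamping out-of-range input."""
    return max(0.0, min(dst_max, ns * dst_max / src_max))

print(remap_ns(500))   # 50.0  (0-1000 file value mapped to Poser's 0-100)
print(remap_ns(2000))  # 100.0 (out-of-range value clamped)
```

Since no two apps agree on the range, the defaults here are only one plausible choice, not a spec-blessed one.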

For the 'd' record (transparency), when I read this value in my program, I store it as the alpha value of the Diffuse component (which OpenGL uses) and just write that value back out on export.

Cheers,

  • Keith

Cinema4D Plugins (Home of Riptide, Riptide Pro, Undertow, Morph Mill, KyamaSlide and I/Ogre plugins) Poser products Freelance Modelling, Poser Rigging, UV-mapping work for hire.


Spanki ( ) posted Tue, 09 July 2002 at 1:41 PM

Oh, just to clear up something else.. ronstuff, you said: "So where does that THIRD piece of information in UV mapping come from? It is the "method" used to map any group of vertices (Planar XY, PlanarYZ, spherical cylindrical etc), and it can be stored with each vertex OR for a whole group of vertices in one reference, but don't confuse this with the "W" coordinate as that is something else (a 4th dimension, if you will - and it is sometimes used to describe "layers"). Some programs store this 3rd piece of UV location information (not the W value) with the NORMALS and some store it with the vertices, but it is always there somewhere, or the map just can't be applied with just the UV coordinates alone." ...I may be misreading that, but in fact, from an application's point of view, there is no 3rd piece of UV location information needed. The UV coordinates describe (translate) the 3-d vertex into a 2-d bitmap location... once I have a UV coordinate, I have all I need to texture that vertex. Of course for computational purposes, there may be a third piece needed to 'create' that UV coordinate, but once the UV coordinate has been created, that third piece of data is not written to the file anywhere... it's no longer needed. I hope this makes sense...
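The point above, that a UV pair alone is enough to fetch a texel, can be sketched like this (hypothetical `sample` helper; the V flip is a common convention, since image rows usually run top-down while V runs bottom-up):

```python
def sample(bitmap, u, v):
    """Map a UV pair in [0,1] to a pixel of a row-major bitmap
    (a list of rows) and return that pixel's value."""
    h = len(bitmap)
    w = len(bitmap[0])
    px = min(int(u * w), w - 1)            # clamp u == 1.0 to last column
    py = min(int((1.0 - v) * h), h - 1)    # flip V: v == 1.0 is the top row
    return bitmap[py][px]

bitmap = [[10, 20], [30, 40]]  # tiny 2x2 "image", row 0 on top
print(sample(bitmap, 0.9, 0.9))  # -> 20 (near the top-right corner)
```

Nothing beyond the two coordinates is consulted; whatever third piece of information helped *create* the UVs is long gone by this point.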

Cinema4D Plugins (Home of Riptide, Riptide Pro, Undertow, Morph Mill, KyamaSlide and I/Ogre plugins) Poser products Freelance Modelling, Poser Rigging, UV-mapping work for hire.

