Forum: Poser - OFFICIAL


Subject: Methods of redistributing modified Poser characters...

konan opened this issue on Dec 17, 2008 · 18 posts


konan posted Thu, 18 December 2008 at 3:58 PM

Quote - > Quote - Speaking of alternative UVs, I was wondering if it was possible to change texture coordinates on an actor from within Poser. If so, it should be quite easy to write a script that could hot-plug an alternative UV mapping. I'd be happy to write that script if someone could give me some pointers. Mostly, whether Poser Python can do it, and whether the .uvs format used by UVMapper is free to use and well-documented. If not, I'd have to come up with my own format and a tool to extract the UVs from an .obj file. No biggie, but of course using an existing format is usually preferable.

Actually it is possible to do this entirely in Poser's material room, no scripts needed. Basically, for any given point in map A, you need to know where to go in map B to get your pixel value. This requires two coordinates for each point in the UV map. Such information can be encoded in two channels of an RGB image, for example red and green.

Let's call this the texture converter map, or TCM. Using a Comp(onent) node, you can extract the RED value from the TCM and plug that into your texture map U_Offset. Similarly, you Comp the GREEN value from the TCM and plug that into your texture map V_Offset.
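Just to make that lookup concrete, here is a rough offline sketch of what the TCM is doing, written as a small Python/Pillow script rather than material-room nodes (the file names tcm.png and map_b.png are only placeholders):

# Offline illustration of the TCM lookup idea (not actual Poser material-room code).
# Assumes Pillow is installed; tcm.png and map_b.png are hypothetical example files.
from PIL import Image

tcm = Image.open("tcm.png").convert("RGB")      # red = U into map B, green = V into map B
map_b = Image.open("map_b.png").convert("RGB")  # texture painted on the other UV layout

out = Image.new("RGB", tcm.size)
bw, bh = map_b.size

for y in range(tcm.height):
    for x in range(tcm.width):
        r, g, _ = tcm.getpixel((x, y))
        u, v = r / 255.0, g / 255.0             # decode the stored coordinates
        bx = min(int(u * bw), bw - 1)
        by = min(int(v * bh), bh - 1)
        out.putpixel((x, y), map_b.getpixel((bx, by)))

out.save("map_b_remapped_to_a.png")

Each output pixel simply asks the TCM where to fetch its color from in map B, which is exactly what the Comp-driven U_Offset/V_Offset hookup does inside the material room.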

All that remains is to produce the TCM. I've been thinking about how to do this, but I haven't done any coding for it. I'm imagining a tool that shows you two image maps. You click on a reference point in A, and the corresponding point in B. If you do this for enough points, the tool could then interpolate all the coordinate transformations and generate the TCM.
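For what it's worth, the interpolation part is not much code. Here is a minimal sketch, assuming the clicked reference points have already been collected as 0-1 UV pairs (the point lists below are made-up examples) and that numpy, scipy and Pillow are available:

# Sketch of turning matched reference points into a TCM via interpolation.
# points_a / points_b are hypothetical hand-picked correspondences.
import numpy as np
from scipy.interpolate import griddata
from PIL import Image

points_a = np.array([[0.1, 0.1], [0.9, 0.1], [0.5, 0.9]])    # clicked in map A (UV space)
points_b = np.array([[0.2, 0.15], [0.8, 0.2], [0.5, 0.85]])  # matching spots in map B

size = 512
ys, xs = np.mgrid[0:size, 0:size]
grid_a = np.column_stack([xs.ravel(), ys.ravel()]) / (size - 1)

# For every pixel of map A, estimate where it should sample in map B.
u_b = griddata(points_a, points_b[:, 0], grid_a, method="linear", fill_value=0.0)
v_b = griddata(points_a, points_b[:, 1], grid_a, method="linear", fill_value=0.0)

tcm = np.zeros((size, size, 3), dtype=np.uint8)
tcm[..., 0] = np.round(np.clip(u_b, 0, 1) * 255).reshape(size, size)  # red   = U in map B
tcm[..., 1] = np.round(np.clip(v_b, 0, 1) * 255).reshape(size, size)  # green = V in map B
Image.fromarray(tcm).save("tcm.png")

With more reference points the interpolation gets tighter, and a fancier warp (thin-plate spline, for instance) could be dropped in place of the simple linear one.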

Sounds like a good idea, but numeric resolution will no doubt stifle the attempt. You see, the red, green and blue channels can only hold a value between 0 and 255. If we try to use these color values as UV coordinates, then we essentially divide the channel value (say, red) by 255 to get a number between 0 and 1. That means our coordinates can only be 0, 1/255, 2/255, ... 254/255, 255/255, so anything in between gets snapped to the nearest of those steps. In other words, 1.4/255 ends up stored as 1/255.
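To put a number on it (just quick arithmetic in Python):

# Quick check of the 8-bit rounding described above.
u = 0.3456                    # the coordinate we actually want
stored = round(u * 255)       # what an 8-bit channel can hold (0..255)
recovered = stored / 255.0    # what gets read back out of the map
print(stored, recovered)      # 88, roughly 0.3451; worst-case error is about 1/510

On a 4000-pixel texture, a worst-case error of roughly 1/510 works out to several pixels, so seams and fine detail would visibly crawl.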

Now, if you used an RGBA image, you could use R and G together for your U coordinate and B and A for your V coordinate, increasing your resolution since the steps become 1/65535, 2/65535, etc.
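Something like this, in Python terms (the pack/unpack helpers are just illustrative names):

# Packing one coordinate into two 8-bit channels, e.g. R and G for U, B and A for V.
def pack16(value):
    q = round(value * 65535)          # quantise to 16 bits
    return q // 256, q % 256          # high byte, low byte

def unpack16(hi, lo):
    return (hi * 256 + lo) / 65535.0

u = 0.3456
hi, lo = pack16(u)
print(hi, lo, unpack16(hi, lo))       # 88 121 0.34560...; error is now on the order of 1/65535

Whether Poser's Comp node and image handling keep enough precision to reassemble the two bytes cleanly is another question; the arithmetic above only shows that the encoding itself has the headroom.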

eeek, math! ;)

Now for our purposes here, this would definitely be the 'hard way' of doing things, but I do like the idea. It could have other useful applications.

Konan