Ahh, ok. That clarifies things a bit. So what I've been thinking/talking about all along does/would not produce the same results you were doing/planning.
With what I'm proposing, if a morph on Mesh A only moved 1 vertex, then the morph produced on Mesh B would ideally involve only one vertex, but maybe 3 or 4 vertices in total. This is what I mean by morph-matching, as opposed to shape-matching.
Whenever I'm saying 'morph delta', I mean quite literally that - the 'morph deltas' that you find in the .cr2 file that make up some particular morph (or find attached to the figure from python), not a computed delta for every vertex of the mesh.
Let's say you have a small cube and a large cube, identical in shape, but not size. Now the large cube has a morph that consists of moving one vertex (out of the 8 total) by +4 on the x-axis. My idea is that the resulting morph in the smaller cube would involve exactly one vertex, moving +4 on the x-axis. So you've transferred the morph, but the other 7 vertices on the small cube do not move at all.
It sounds like, using your old/current method, you'd end up with 2 identical cubes - yes? All 8 vertices would be morphed (by some computed delta amount). This is what I mean by 'shape-matching'.
The method I've been talking about could not, for example, make a morph that converted V4's face into a V3 default face shape (actually, it could, but you'd have to make a morph on V3 that started with the shape of V4 and ran backwards :) ).
Now, having said that, you could also use the vertex correlation data to do shape-matching (and that would be a valid user-option). In which case, you'd assume that the morph was already 'applied' to the source mesh (the script wouldn't need to know about the 'morph deltas' at all in this case) and you'd just compute deltas for every vertex of the destination/target mesh to try to make them match.
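To make the distinction concrete, here's a minimal sketch of what each approach computes (my own illustration, not code from either script; correlate() and closest_point_on_meshA() stand in for whichever correlation method is used):

# Morph-matching: the sparse deltas of the selected morph drive everything.
# Only meshB vertices correlated to morphed meshA vertices get deltas.
def morph_match(meshA_deltas, correlate):
    meshB_deltas = {}
    for iA in meshA_deltas.keys():              # sparse: morphed verts only
        (dx, dy, dz) = meshA_deltas[iA]
        for (iB, w) in correlate(iA):           # weighted meshB matches
            (bx, by, bz) = meshB_deltas.get(iB, (0.0, 0.0, 0.0))
            meshB_deltas[iB] = (bx + dx*w, by + dy*w, bz + dz*w)
    return meshB_deltas

# Shape-matching: the morphed *shape* of meshA drives everything.  Every
# meshB vertex gets a delta moving it onto the meshA surface.
def shape_match(meshB_verts, closest_point_on_meshA):
    deltas = []
    for (x, y, z) in meshB_verts:
        (ax, ay, az) = closest_point_on_meshA((x, y, z))
        deltas.append((ax - x, ay - y, az - z))
    return deltas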
Sorry if I'm repeating myself, I just want to make the difference between the two approaches clear.
Cinema4D Plugins (Home of Riptide, Riptide Pro, Undertow, Morph Mill, KyamaSlide and I/Ogre plugins) Poser products Freelance Modelling, Poser Rigging, UV-mapping work for hire.
One more example, and then I'll go away :) ...
Let's assume we have V3 with a 'pointy ear' morph dialed in and the other mesh is Sydney.
If I understand correctly (and if it worked perfectly), using your old/current script, you'd end up with a "Sydney that looks like V3 with pointy ears".
The approach I'm suggesting means that you don't 'dial in' the morph on V3 at all; you let the user select it from a list, and you end up with "Sydney that looks like Sydney, but with pointy ears".
Cinema4D Plugins (Home of Riptide, Riptide Pro, Undertow, Morph Mill, KyamaSlide and I/Ogre plugins) Poser products Freelance Modelling, Poser Rigging, UV-mapping work for hire.
This is where Ockham's normalization routine comes in. The deltas are adjusted based on both the source and the target.
I've typed up my project notes into (probably laughably incorrect) outline form. See the attached. Maybe it can help clarify a few ideas.
And I'll have to respond to everything else later. Blanged AOL.
===========================sigline======================================================
Cage can be an opinionated jerk who posts without thinking. He apologizes for this. He's honestly not trying to be a turkeyhead.
Cage had some freebies, compatible with Poser 11 and below. His Python scripts were saved at archive.org, along with the rest of the Morphography site, where they were hosted.
*"Sorry if I'm repeating myself, I just want to make the difference between the two approaches clear."
*No problem. Sometimes things have to be repeated to me a lot before I can get it. It seems like you're outlining the problems I was compalining about at the end of page 1 of this thread. Ockham pointed me toward a solution which modifies (normalizes, in his term) the deltas when they are passed from meshA to meshB. Besically, the process subtracts all the changes we don't want, compensating for the different base shapes of the two meshes, leaving us with corrected deltas for meshB.
(Ockham’s routine to normalize deltas):
Rosie and Paris
Rosie = distance between meshA center and meshA vertex coords
Paris = distance between meshB center and meshB vertex coords
meshB_delta = (meshA_delta/Rosie)*Paris
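In code form, that normalization might look like this (a minimal sketch based only on the three lines above; the vector math and the center computation are my assumptions, not Ockham's code):

def normalized_delta(meshA_delta, centerA, vertA, centerB, vertB):
    def dist(c, v):
        return ((v[0]-c[0])**2 + (v[1]-c[1])**2 + (v[2]-c[2])**2) ** 0.5
    rosie = dist(centerA, vertA)     # meshA center to meshA vertex
    paris = dist(centerB, vertB)     # meshB center to meshB vertex
    if rosie == 0.0:
        return (0.0, 0.0, 0.0)       # degenerate: vertex sits at the center
    scale = paris / rosie            # meshB_delta = (meshA_delta/Rosie)*Paris
    return (meshA_delta[0]*scale, meshA_delta[1]*scale, meshA_delta[2]*scale)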
As I've been coding this, I've thought of the basic difference more along the lines presented in the morph transfer notes I posted above. A delta method works with existing morph deltas, never looking at the shape of the geometry when deriving the shape of the final morph. This is contrasted with the approach Ockham frequently uses, in which he actually changes the meshes in Poser, in the 3D view, then derives his morphs from the world vertex positions which result. His method may be a trick that is necessary to circumvent PoserPython's RAM leak, although I'm not sure of that.
Terminology isn't important to me, however. As long as it isn't standing in the way of actually communicating, any terms at all for any of these ideas are fine with me.
So, uh.... Have we communicated? Or have I missed the point again?
===========================sigline======================================================
Cage can be an opinionated jerk who posts without thinking. He apologizes for this. He's honestly not trying to be a turkeyhead.
Cage had some freebies, compatible with Poser 11 and below. His Python scripts were saved at archive.org, along with the rest of the Morphography site, where they were hosted.
Ok, thanks for the notes file - that helps a lot. As I mentioned earlier, I hadn't really followed/dissected what you were already doing very closely, so this gives me a good overview.
From my brief look so far, it appears that none of that (including your notes on proposed variations) really includes what I've been thinking about and trying to convey :). I still need to dig up Ockham's Rosie & Paris example (which script is that?), but I'm now at least familiar with his NoPoke code.
Anyway, at the risk of repeating myself (what's new? :)), let me try to outline the approach I've had in mind (or at least the salient differences) in some way that makes sense (using your list of terminology, where possible).
There are certain 'features' (properties?) of my method:
1. The resulting morph created in meshB would only have delta values for the vertices within meshB that accomplish a "similar difference from the default meshB to the difference between the morphed and unmorphed meshA". In other words, if the morph is a "longer nose tip" morph, then there would only be deltas for the vertices in meshB that make up the tip of the nose.
It might be helpful to look at how a morph is stored in a .cr2 file. For example purposes, I recently made a "Chin Un-Cleft" morph for V4 and saved it out as a Pose Injection file...
{
version
{
number 4.01
}
actor head
{
channels
{
targetGeom PBMCC_34
{
name ChinUn-Cleft
hidden 0
keys
{
static 0
k 0 0
}
indexes 84
numbDeltas 15078
deltas
{
d 12496 0.00000 -0.00009 0.00000
d 12497 0.00000 -0.00007 0.00000
d 12499 0.00000 -0.00010 0.00000
d 12500 0.00000 -0.00025 0.00000
d 12501 0.00000 -0.00025 0.00000
d 12502 0.00000 -0.00013 0.00000
d 12503 0.00000 -0.00033 0.00000
d 12504 0.00000 -0.00034 0.00000
d 12505 0.00000 -0.00022 0.00000
d 12506 0.00000 -0.00030 0.00000
d 12507 0.00000 -0.00013 0.00000
d 12508 0.00000 -0.00011 0.00000
d 12774 0.00000 -0.00001 0.00000
d 12778 0.00000 -0.00004 0.00000
d 12863 0.00000 -0.00002 0.00000
d 12865 0.00000 -0.00007 0.00000
d 12866 0.00000 -0.00003 0.00000
d 12867 0.00000 -0.00004 0.00000
d 12868 0.00000 -0.00012 0.00000
d 12906 0.00000 -0.00003 0.00000
d 12907 0.00000 -0.00003 0.00000
d 12910 0.00000 -0.00003 0.00000
d 12933 0.00000 -0.00024 0.00000
d 12934 0.00000 -0.00020 0.00000
d 12935 0.00000 -0.00034 0.00000
d 12936 0.00000 -0.00030 0.00000
d 12937 0.00000 -0.00025 0.00000
d 12938 0.00000 -0.00035 0.00000
d 12939 0.00000 -0.00002 0.00000
d 12941 0.00000 -0.00005 0.00000
d 12942 0.00000 -0.00002 0.00000
d 12943 0.00000 -0.00007 0.00000
d 12944 0.00000 -0.00012 0.00000
d 12945 0.00000 -0.00014 0.00000
d 12946 0.00000 -0.00022 0.00000
d 12947 0.00000 -0.00005 0.00000
d 12948 0.00000 -0.00001 0.00000
d 12949 0.00000 -0.00014 0.00000
d 12950 0.00000 -0.00037 0.00000
d 12951 0.00000 -0.00038 0.00000
d 12952 0.00000 -0.00033 0.00000
d 12960 0.00000 -0.00025 0.00000
d 12961 0.00000 -0.00023 0.00000
d 12962 0.00000 -0.00016 0.00000
d 12963 0.00000 -0.00007 0.00000
d 12964 0.00000 -0.00001 0.00000
d 13640 0.00000 -0.00009 0.00000
d 13642 0.00000 -0.00007 0.00000
d 13643 0.00000 -0.00025 0.00000
d 13644 0.00000 -0.00013 0.00000
d 13645 0.00000 -0.00022 0.00000
d 13646 0.00000 -0.00011 0.00000
d 13647 0.00000 -0.00033 0.00000
d 13648 0.00000 -0.00030 0.00000
d 13910 0.00000 -0.00001 0.00000
d 13915 0.00000 -0.00004 0.00000
d 13999 0.00000 -0.00002 0.00000
d 14000 0.00000 -0.00004 0.00000
d 14001 0.00000 -0.00007 0.00000
d 14002 0.00000 -0.00012 0.00000
d 14004 0.00000 -0.00003 0.00000
d 14039 0.00000 -0.00003 0.00000
d 14041 0.00000 -0.00003 0.00000
d 14063 0.00000 -0.00024 0.00000
d 14064 0.00000 -0.00034 0.00000
d 14065 0.00000 -0.00020 0.00000
d 14066 0.00000 -0.00030 0.00000
d 14067 0.00000 -0.00002 0.00000
d 14068 0.00000 -0.00007 0.00000
d 14069 0.00000 -0.00005 0.00000
d 14070 0.00000 -0.00012 0.00000
d 14072 0.00000 -0.00002 0.00000
d 14073 0.00000 -0.00014 0.00000
d 14074 0.00000 -0.00022 0.00000
d 14075 0.00000 -0.00005 0.00000
d 14076 0.00000 -0.00014 0.00000
d 14077 0.00000 -0.00001 0.00000
d 14078 0.00000 -0.00037 0.00000
d 14079 0.00000 -0.00033 0.00000
d 14087 0.00000 -0.00025 0.00000
d 14088 0.00000 -0.00023 0.00000
d 14089 0.00000 -0.00016 0.00000
d 14090 0.00000 -0.00007 0.00000
d 14091 0.00000 -0.00001 0.00000
}
}
}
}
}
...(this is a .pz2 file, but the formatting is the same as a .cr2 file).
Here's the part we're interested in...
indexes 84
numbDeltas 15078
deltas
{
d 12496 0.00000 -0.00009 0.00000
d 12497 0.00000 -0.00007 0.00000
...so in this case numbDeltas is the total number of vertices in the head mesh, BUT this morph is only made up of deltas for 84 of those (the 'indexes' value). So the table listed below there makes up a 'sparse' table that only lists delta values for 84 vertices...
deltas
{
d 12496 0.00000 -0.00009 0.00000
d 12497 0.00000 -0.00007 0.00000
d 12499 0.00000 -0.00010 0.00000
d 12500 0.00000 -0.00025 0.00000
d 12501 0.00000 -0.00025 0.00000
...etc.
...the first number following each 'd' is the vertex index involved in this morph.
So, if we're trying to transfer that morph from meshA to meshB, we'd expect that the morph would look similar in the .cr2 file (the number of vertices would likely be different, but we're looking for a 'sparse' table - not the entire list of vertices for that actor).
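As a side note, the same sparse table can be rebuilt in PoserPython from a loaded figure (a minimal sketch; it assumes geom and parm are the head geometry and the morph parameter, as in the examples below):

sparse = {}
for i in range(geom.NumVertices()):
    (dx, dy, dz) = parm.MorphTargetDelta(i)
    if (dx or dy or dz):             # keep only morph-involved vertices
        sparse[i] = (dx, dy, dz)
# len(sparse) corresponds to 'indexes' (84 here);
# geom.NumVertices() corresponds to 'numbDeltas' (15078)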
So, in Python-ese... when we're looking at morphs, you might have something like the following code:
#----------------------------------------------------------------------------
# walk through current actor's parameters, looking for morphs
#----------------------------------------------------------------------------
for parm in actor.Parameters():
    if (parm.IsMorphTarget()):
        #-------------------------
        # ok, we found a morph
        #-------------------------
        try:
            geom = actor.Geometry()
        except:
            # do nothing
            pass
        else:
            if(not geom): #-- is this redundant?
                continue
            verts = geom.Vertices()
            numVerts = geom.NumVertices()
            for i in range(numVerts):
                (deltaX, deltaY, deltaZ) = parm.MorphTargetDelta(i)
                #-------------------------
                # deltaX, deltaY, deltaZ contain the morph
                # delta values for this vertex... for example
                # purposes, we'll 'bake' the morph into the
                # default mesh...
                #-------------------------
                mvert = verts[i]
                mvert.SetX(mvert.X() + morphval * deltaX)
                mvert.SetY(mvert.Y() + morphval * deltaY)
                mvert.SetZ(mvert.Z() + morphval * deltaZ)
#-- yada, yada, other code below here, etc.
...but, as we've seen above, not every vertex in the actor necessarily has any 'delta' for any particular morph, so let's add a simple test...
#----------------------------------------------------------------------------
# walk through current actor's parameters, looking for morphs
#----------------------------------------------------------------------------
for parm in actor.Parameters():
    if (parm.IsMorphTarget()):
        #-------------------------
        # ok, we found a morph
        #-------------------------
        try:
            geom = actor.Geometry()
        except:
            # do nothing
            pass
        else:
            if(not geom): #-- is this redundant?
                continue
            verts = geom.Vertices()
            numVerts = geom.NumVertices()
            for i in range(numVerts):
                (deltaX, deltaY, deltaZ) = parm.MorphTargetDelta(i)
                #-------------------------------------------------
                # deltaX, deltaY, deltaZ contain the morph
                # delta values for this vertex... for example
                # purposes, we'll 'bake' the morph into the
                # default mesh...
                #-------------------------------------------------
                #-------------------------------------------------
                # But, only some verts have morph deltas... so
                # we can test for non-zero values before doing
                # anything with this vertex...
                #-------------------------------------------------
                if( deltaX or deltaY or deltaZ ):
                    mvert = verts[i]
                    mvert.SetX(mvert.X() + morphval * deltaX)
                    mvert.SetY(mvert.Y() + morphval * deltaY)
                    mvert.SetZ(mvert.Z() + morphval * deltaZ)
#-- yada, yada, other code below here, etc.
So, what the script would be doing is walking through the meshA actor until it found the appropriate morph (my sample code above looks at all morphs on that actor). When it finds the right morph, it then walks through the meshA vertex list, getting the morph delta values. It's at that spot in the script (however you get to that point) that you'd want to skip all non-involved vertices (skip any vertices that don't have deltas for this morph).
The morph deltas (of meshA) are driving the creation of the morph in meshB... each time you find a vertex that has morph deltas, you call some routine or do whatever code is needed to find the corresponding vertex (or vertices) in meshB and create/update the morph deltas for that (or those) vertex/vertices only.
2. The "whatever code is needed to find the corresponding vertex (or vertices) in meshB" mentioned above is basically what I outlined earlier in the thread, but could be some other variation like closest vertex (Cage), weighted average of close vertices (svdl), etc.
My proposed method sounds similar to what svdl suggested recently, but I look for intersections of rays cast out from each vertex position in meshB (btw, not from the center of any mesh) along each vertex normal, with polygons in meshA.
Once you determine which polygon the vertex intersects, you update the list of vertex weights for the vertices that make up that polygon (each vertex in meshB can/would be associated with multiple vertices in meshA).
You'd have to put some thought into how to best organize the data, but basically, we're going to use this data to match against, in the code mentioned in #1 above, when you find a vertex that has morph deltas. In fact, it probably makes sense not to create this table at all, but just compute the information on-the-fly at that point in the script (this lets us use some known screening information, described later...).
Just for completeness, the information we need is:
For each (morph-involved) vertex in meshA, we need the list of vertices in meshB that will be affected and a weight value to affect each one by. The weight value is computed... uhm, I forget :), but it's spelled out earlier in the thread and is based on 'where' that ray intersected with the polygon/triangle, relative to the distance to each vertex that makes up the triangle.
I hadn't really focused on it, but all of the above can be integrated with the octree code to reduce the number of polygons/vertices you search through, along with any other screening (normal direction tests, exclusion lists, etc). But see further discussion on this below...
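For what it's worth, weighting by 'where the ray hit the triangle' is essentially a barycentric-coordinate computation. A sketch of the standard formula (my illustration; the exact math spelled out earlier in the thread may differ):

def barycentric_weights(p, a, b, c):
    # Weights of triangle corners a, b, c for a point p on the triangle's
    # plane; they sum to 1.0, and a corner is weighted more heavily the
    # closer p lies to it.
    def sub(u, v): return (u[0]-v[0], u[1]-v[1], u[2]-v[2])
    def dot(u, v): return u[0]*v[0] + u[1]*v[1] + u[2]*v[2]
    v0 = sub(b, a); v1 = sub(c, a); v2 = sub(p, a)
    d00 = dot(v0, v0); d01 = dot(v0, v1); d11 = dot(v1, v1)
    d20 = dot(v2, v0); d21 = dot(v2, v1)
    denom = d00 * d11 - d01 * d01
    if denom == 0.0:
        return (1.0, 0.0, 0.0)   # degenerate triangle; fall back to corner a
    wb = (d11 * d20 - d01 * d21) / denom
    wc = (d00 * d21 - d01 * d20) / denom
    wa = 1.0 - wb - wc
    return (wa, wb, wc)          # weights for a, b, c respectively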
3. Sorry for the disjunction in numbering, since this part is actually part of #1, but I wanted to set it up first :). Ok, so we now have:
meshA - has morphs we want
meshB - will get morphs transfered to it
morphX - user-selected morph to transfer
vertWeightTable - the table computed in #2 above, containing the weighted vertex correlations
...so now we jump back into the code point mentioned in #1 above...
verts = meshAgeom.Vertices()
numVerts = meshAgeom.NumVertices()
for i in range(numVerts):
    (deltaX, deltaY, deltaZ) = parm.MorphTargetDelta(i)
    #-------------------------------------------------
    # deltaX, deltaY, deltaZ contain the morph
    # delta values for this vertex...
    # But, only some verts have morph deltas... so
    # we can test for non-zero values before doing
    # anything with this vertex...
    #-------------------------------------------------
    if( deltaX or deltaY or deltaZ ):
We're looping through the vertices of meshA, and looking at the morph deltas to see if this vertex has some (non-zero) delta value. If it does, then we loop through our vertWeightTable, looking for this vertex index. If there are any vertices in meshB that will be affected by it, we create/update a morph delta entry for that vertex in meshB...
vertexB_deltaX = vertexB_deltaX + (deltaX * weightAB)
vertexB_deltaY = vertexB_deltaY + (deltaY * weightAB)
vertexB_deltaZ = vertexB_deltaZ + (deltaZ * weightAB)
...where weightAB is the morph weighting computed for vertexA relative to vertexB (as described above and earlier in the thread, based on the ray intersection point, relative to the vertexA's that make up the triangle).
That's the (long-winded version of the) broad picture :). I'm going to stop now before I totally blow up this "reply" software.
Cinema4D Plugins (Home of Riptide, Riptide Pro, Undertow, Morph Mill, KyamaSlide and I/Ogre plugins) Poser products Freelance Modelling, Poser Rigging, UV-mapping work for hire.
[continued]...
I forgot to mention the screening code part... basically, I had written all of that up, with the idea that you'd have this big honkin' multi-dimensional lookup table (the vertWeightTable thing). But as I was writing the #3 part, it became clear that I'd organized the table wrong, making it difficult to scan through it, so I went back and added the stuff about just computing the information on-the-fly...
Instead of creating the table, if you just do your vertex correlation code at the spot in the code where you need to find the matching vertex (or vertices), it's less hassle figuring out how to store it and there's another benefit - screening. Since the selected morph(s) are driving the process anyway, you only really need to look around the morphed area for matching vertices.
With that knowledge in hand, you do a quick pre-processing step... scan through the morph delta list once and come up with min/max (in all 3 dimensions) world-space vertex values that define the bounding box. Then add some fudge factor to that (to pick up stray vertices around the edges).
Now you have the only bounding box you need to look at and it's relevant to the task at hand (ie. it's perfectly centered around the morph area in question). You can then do further screening based on exclusion lists, or whatever.
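A sketch of that pre-pass (my own illustration; the WorldVertices() call for world-space positions and the fudge value are assumptions to be checked against your Poser version):

FUDGE = 0.01   # arbitrary padding, in Poser units, to catch stray edge verts

worldVerts = geom.WorldVertices()
minX = minY = minZ = 1e30
maxX = maxY = maxZ = -1e30
for i in range(geom.NumVertices()):
    (dx, dy, dz) = parm.MorphTargetDelta(i)
    if (dx or dy or dz):                         # morph-involved verts only
        v = worldVerts[i]
        if v.X() < minX: minX = v.X()
        if v.X() > maxX: maxX = v.X()
        if v.Y() < minY: minY = v.Y()
        if v.Y() > maxY: maxY = v.Y()
        if v.Z() < minZ: minZ = v.Z()
        if v.Z() > maxZ: maxZ = v.Z()
# pad the box so near-miss vertices around the edges still get picked up
minX = minX - FUDGE; minY = minY - FUDGE; minZ = minZ - FUDGE
maxX = maxX + FUDGE; maxY = maxY + FUDGE; maxZ = maxZ + FUDGE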
Cinema4D Plugins (Home of Riptide, Riptide Pro, Undertow, Morph Mill, KyamaSlide and I/Ogre plugins) Poser products Freelance Modelling, Poser Rigging, UV-mapping work for hire.
...another benefit of doing the vertex-correlation on the fly is that it pretty much eliminates all of your memory problems (!).
You might be thinking - "yeah, but it would be faster if I only had to compute the correlations once", but my contention (from recent posts above) is that I think you're going to have to re-compute them for each type of morph anyway (ie. nose, vs ear, vs lips... as the user re-positions the figures to line them up). Also, since you would now only be computing the matches for the local morph area, the savings should be huge, relative to other methods.
Again, none of this is relevant to a 'shape-matching' approach, only to a 'morph-matching' approach. If you want to make Sydney's head look like V3's head (or vice-versa), you need another system :).
Cinema4D Plugins (Home of Riptide, Riptide Pro, Undertow, Morph Mill, KyamaSlide and I/Ogre plugins) Poser products Freelance Modelling, Poser Rigging, UV-mapping work for hire.
Just some house-keeping... apparently when I was chopping up the example python code shown above, I chopped out the code where 'morphval' got assigned. In the original code, right after:
if (parm.IsMorphTarget()):
...there was another level of indentation that read...
    #---------------------------------------------------------------------------------
    # if morph target found, see if it's currently set to something besides zero
    #---------------------------------------------------------------------------------
    morphval = parm.Value()
    if(morphval):
...I took it out to simplify the example but savvy readers may have wondered why the sample was using un-assigned variables :).
This has nothing to do with the discussion at hand, I just hate leaving things dangling like that. Just for completeness, I offer this freebie script, from which I chopped out that example:
...this script will 'bake' any active morphs (morph dials set to non-zero) into a figure's mesh. If you then save the .cr2 file (or export its .obj file), you'd get a new .obj file with the new shape. This can be handy for using Poser magnets (or the new morph tool) to fine-tune some mesh you are creating, without going back to the modelling app.
Speaking of savvy readers... you might also note that this particular script didn't bother checking to see if the deltas exist for every vertex before using them... which means that the deltaX/Y/Z values might all be zero, but that doesn't hurt anything in this case.
Cinema4D Plugins (Home of Riptide, Riptide Pro, Undertow, Morph Mill, KyamaSlide and I/Ogre plugins) Poser products Freelance Modelling, Poser Rigging, UV-mapping work for hire.
Cage,
If you want to try the method outlined above, you should find that it drastically simplifies the entire process. You should be able to do a simple test-case shell code using your existing 'closest vertex' correlation code...
User specifies meshA and meshB
User specifies which actor has the morph in question
User specifies which morph to transfer
User moves the two figures around (use the BODY or hip) to line them up as near as possible in the morphed area (but do not 'dial in' the morph in question, unless that happens to help line up the features)
User hits 'go' button
Script finds meshA
Script finds specified actor
Script finds specified morph
Script loops through morph deltas, building a min/max bounding box based on world-space values of verts in meshA that are involved in this morph (non-zero deltas)
Script adds fudge-factor to bounding box (some room around the edges).
Script loops through morph deltas again and...
for each non-zero deltaX/Y/Z
find closest matching (world-space) vert on meshB (using bounding box above for screening)
store deltaX/Y/Z from above as a delta for this vertex in meshB
create morph on meshB, using the data accumulated above
...done. As you see, this becomes very much simplified - no normalizing of vertices, no dealing with lots of tables of stored correlation data, etc. It's completely driven by the deltas that exist for any given morph and those same delta values are 'the answer' for the deltas to use on the other mesh as well.
Once you're satisfied that this is a viable approach, you should be able to 'plug in' new correlation methods as well as additional screening methods.
Cinema4D Plugins (Home of Riptide, Riptide Pro, Undertow, Morph Mill, KyamaSlide and I/Ogre plugins) Poser products Freelance Modelling, Poser Rigging, UV-mapping work for hire.
Sorry (I need to learn to think things through completely before posting :) )...
The above is not valid in the following way...
"find closest matching (world-space) vert on meshB (using bounding box above for screening)"
...what we really need (to make sure all affected verts in meshB actually get a delta assigned) is kind of the reverse. We still need to loop through meshB verts, looking for the closest match of meshA verts, instead of the other way around.
You could accomplish this in different ways, but one way would be that once you had your bounding box, you know which meshB verts are 'potentially' going to be affected by this morph. So you could loop through those verts, finding the closest matching meshA vertex. You could then loop through the meshA morph deltas and see if that meshA vert had non-zero deltas for this morph and if so, use that delta info for this meshB vert. If not, skip it.
The above means some nested looping, but you could fix that by creating some lookup tables if desired.
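A rough sketch of that reversed loop, pulling the pieces above together (my own illustration, not code from the thread; meshBactor and the SpawnTarget/SetMorphTargetDelta calls at the end are assumptions about the PoserPython API that should be verified against your Poser version's docs):

# Assumed context: meshAgeom/meshBgeom are the two geometries, parm is the
# selected morph on meshA's actor, meshBactor is meshB's actor, and
# minX..maxZ is the padded bounding box from the pre-pass described earlier.
vertsA = meshAgeom.WorldVertices()
vertsB = meshBgeom.WorldVertices()

def dist2(a, b):
    return (a.X()-b.X())**2 + (a.Y()-b.Y())**2 + (a.Z()-b.Z())**2

newDeltas = {}                        # sparse: meshB vertex index -> delta
for iB in range(meshBgeom.NumVertices()):
    vB = vertsB[iB]
    if not (minX <= vB.X() <= maxX and
            minY <= vB.Y() <= maxY and
            minZ <= vB.Z() <= maxZ):
        continue                      # screening: outside the morph area
    # find the closest meshA vertex (naive O(n) scan; an octree could
    # narrow the candidates considerably)
    best = 0
    bestD = dist2(vertsA[0], vB)
    for iA in range(1, meshAgeom.NumVertices()):
        d = dist2(vertsA[iA], vB)
        if d < bestD:
            best = iA
            bestD = d
    (dx, dy, dz) = parm.MorphTargetDelta(best)
    if (dx or dy or dz):              # closest meshA vert not in this morph: skip
        newDeltas[iB] = (dx, dy, dz)

# create the morph on meshB from the accumulated sparse deltas
# (hypothetical API usage -- check SpawnTarget / SetMorphTargetDelta
# against the PoserPython manual)
meshBactor.SpawnTarget(parm.Name())
newParm = meshBactor.Parameter(parm.Name())
for iB in newDeltas.keys():
    (dx, dy, dz) = newDeltas[iB]
    newParm.SetMorphTargetDelta(iB, dx, dy, dz)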
Cinema4D Plugins (Home of Riptide, Riptide Pro, Undertow, Morph Mill, KyamaSlide and I/Ogre plugins) Poser products Freelance Modelling, Poser Rigging, UV-mapping work for hire.
I'm a bit worried that I may have misunderstood again, but it also seems evident that I omitted a few ideas from my outline.
First, I see a definite benefit to screening out zero deltas. This will result in slimmer morph targets. I'm embarrassed that I didn't think of that myself. But I don't understand where we'd get any greater benefit from such screening in terms of simplifying or speeding up the process. The current working vertex-to-vertex script will simply write zero deltas for meshB where it receives zero deltas from meshA. So when a nose morph is processed, we end up with a delta for every vertex (which should change), but most of those deltas don't affect the outcome beyond inflating the size of the targetGeom reference in the .cr2.
The morph with which I've been testing during development is a full-head character morph which moves 90% or more of the vertices in the Vicky 1 head. In the case of such a morph, screening out the zero deltas up front doesn't present as great a benefit as it would in the example of a mere nose morph. We still have to create deltas for 90%+ of the vertices in Vicky3/meshB (assuming the areas of relative density in both meshes are similar; a disproportionately dense back of the head in Vicky 3 would reduce the percentage, presumably).
This test case led me to conclude that one of the fundamental ideas of the script needed to be the use of data files to store correlated vertices. Since we don't know which correlations we'll need in any case, and extreme cases (like my character morph) may require up to a full meshA to meshB comparison, the overall process makes more sense to me if we split it up. So I developed a method which allows up-front comparison of elements that will remain constant between the two actors.
To do this, I deviate from all of the examples I've seen by Ockham and others. Whereas those examples always seem to work with the world vertex positions of the actual actors in Poser, I decided to work with the source geometries for those actors. (I failed to clarify that in my notes.) The source geometries will remain constant. That's the fundamental difference in approach between the procedure I've tested and that used in NoPoke and elsewhere. NoPoke uses Set to change the actor geometry, albeit temporarily. Then it works with the world vertex "shape" of that geometry when it makes comparisons to develop the final morph. In that situation it seems to me that each morph will have to run a full comparison check independently, because the surfaces - the "shapes" - will differ in each case. There is no constant that can be used to speed up or ultimately simplify the process. Which is fine when you're just looking at an isolated area morph, like a nose, but not so helpful (as I understand it) when the entire head geometry needs to be considered. So I rejected this approach at the outset.
Unfortunately, this rejected approach may be necessary - if this is to be workable within the functional limits of PoserPython. My method of comparing the base geometries requires that I store a lot of information within Python for processing. This is apparently the reason for the RAM leakage. The world vertex approach (of NoPoke) allows certain data to be stored using Poser internal methods, by using the Set method to change the geometry. Python can then ignore that information until we need it, and there's no (or at least less) leakage. As far as I can tell. But to get this benefit, I need to accept the need to run every mesh comparison for every morph. Which doesn't seem feasible in all cases.
So my current problem is apparently one of either accepting RAM leaks, accepting the need for full mesh comparisons in each case, or stepping beyond PoserPython's limitations. If the RAM leaks are due to some other cause, this problem can be resolved (yet I lack the programming acumen to determine this). If there's a way to find some constant between the meshes when using the world vertex approach, this problem can go away. If anyone sees a way out of my dilemma, I'll be happy. :) I don't see one, however, which allows me to improve things while staying within PoserPython's limits. So I've been considering stepping outside of PoserPython. Is there anything else that can change that I'm not considering? If you see anything, please tell me! :)
Here's the thread link for Ockham's Rosie and Paris example.
http://www.renderosity.com/mod/forumpro/showthread.php?message_id=2863715&ebot_calc_page#message_2863715
===========================sigline======================================================
Cage can be an opinionated jerk who posts without thinking. He apologizes for this. He's honestly not trying to be a turkeyhead.
Cage had some freebies, compatible with Poser 11 and below. His Python scripts were saved at archive.org, along with the rest of the Morphography site, where they were hosted.
Quote - I'm a bit worried that I may have misunderstood again, but it also seems evident that I omitted a few ideas from my outline.
It's quite possible that I have misunderstood. I really should sit down and study your previous scripts - I just hadn't had the time to do so yet.

Quote - First, I see a definite benefit to screening out zero deltas. This will result in slimmer morph targets. I'm embarrassed that I didn't think of that myself. But I don't understand where we'd get any greater benefit from such screening in terms of simplifying or speeding up the process. The current working vertex-to-vertex script will simply write zero deltas for meshB where it receives zero deltas from meshA. So when a nose morph is processed, we end up with a delta for every vertex (which should change), but most of those deltas don't affect the outcome beyond inflating the size of the targetGeom reference in the .cr2.
...if they are in fact zero, then I have misunderstood what your process was doing - sorry about that. The speed-up I refer to would come from not (pre)processing the entire actor/mesh, but only the parts of it involved in the morph, but more on this below...
Quote - The morph with which I've been testing during development is a full-head character morph which moves 90% or more of the vertices in the Vicky 1 head. In the case of such a morph, screening out the zero deltas up front doesn't present as great a benefit as it would in the example of a mere nose morph. We still have to create deltas for 90%+ of the vertices in Vicky3/meshB (assuming the areas of relative density in both meshes are similar; a disproportionately dense back of the head in Vicky 3 would reduce the percentage, presumably).
I agree with all of the above... with the added comment that I don't personally hold out much hope for doing such broadly scoped morphs in the first place - due to the differences in shapes as discussed earlier. If the morphs are generic enough in nature (make face taller, wider, etc) then they'd probably work fine, but if it does several more specific things (make nose longer, make ears pointy, push lips out, wrinkle forehead), then you've still got the dis-similar shape issue.
Quote - This test case led me to conclude that one of the fundamental ideas of the script needed to be the use of data files to store correlated vertices. Since we don't know which correlations we'll need in any case, and extreme cases (like my character morph) may require up to a full meshA to meshB comparison, the overall process makes more sense to me if we split it up. So I developed a method which allows up-front comparison of elements that will remain constant between the two actors.
Again, I agree with that logic, except that it relies on the assumption that general and re-usable vertex correlations can be made programmatically for future use.
Anyway, I think we're mostly communicating now. I was making some assumptions about what your script did, based on some of the images posted, but I'll take your descriptions above as gospel :).
So, let's get back to the basics... there are two primary issues that need to be resolved to transfer morphs between figures...
1. Mesh Topology Differences
This is a problem in determining how vertices from one mesh correlate to the vertices of the other (with a different mesh topology). Assuming the meshes were nearly identical in shape, but have different topologies - this would be a case like hi-res V3 vs lo-res V3 or the hi/lo Mikis that Joe is working on.
The "closest vertex" approach to solving this has piggy-back issues. The "vertex projection to plane resulting in weighted list of vertices" approach that I described back on page 2 should produce better results. And svdl's "weighted average" method wasn't spelled out much, but sounds like it would be better as well.
2. Mesh Shape Differences
Note that none of the above really address this issue at all. If the nose of one figure is near the lips of the other figure, then none of the above methods do anything to handle that - this has to be considered separately. So the question is how to handle it. And the answer is - I don't know :).
Without some MIT grad coming up with heuristic algorithms that recognize amorphous 'features' of a mesh (nose, nostrils, lips, ears, etc), I don't know of any way to handle it purely programmatically (let alone spell the words I just used above).
You may or may not know that PhilC's WW also doesn't try to do this strictly programmatically either. He has to sit down and produce the datasets needed for each new mesh to correlate the shapes in some way that's usable by his scripts.
As mentioned earlier, if the shapes are 'relatively close', then due to the nature of many organic morphs, you can probably still get decent results. But as you get more specific with the morphs, the closer the shapes are going to need to be.
You can either have the human operator position and shape the meshes to get them similar before running the script, or someone is going to have to sit down and do that for entire figures to build reusable datasets that can be loaded at runtime.
This was the point I was trying to make earlier... if you have the skills to morph one figure into the shape of another, then you should be skilled enough to make the morphs you need to start with :). It's kind of a catch-22 situation.
Cinema4D Plugins (Home of Riptide, Riptide Pro, Undertow, Morph Mill, KyamaSlide and I/Ogre plugins) Poser products Freelance Modelling, Poser Rigging, UV-mapping work for hire.
Okay. I hope we're getting to the same page. :-P This thread has been like that Fawlty Towers episode, Communication Problems. We're all speaking our own languages and talking past one another to some extent. The sad thing is that this problem actually follows me everywhere I go. Cage is not the brightest bulb in the lamp. :-P (As such, there's an inherent risk in taking anything I say as 'gospel'. I often confuse myself and get things backwards and sideways.)
I agree that there is potential in svdl's suggestion. I skimmed through his recent post and missed the fact that he proposes a weighting solution based on vertex comparisons, without polygon comparisons. The vertex-to-polygon comparisons seem to be the source of the RAM leak problems. I'm going to see if I can puzzle out svdl's weighting, then. Perhaps the vertex approach can be used, after all.
Both of the problems you mention are concerns, but they're things I'm trying not to think about too much until the basics are covered. There can be various ways of tweaking the process (through user interaction, screening methods, offset methods, or some way of defining certain "features" for the script) that can be worked in after the comparisons and morph creation have been ironed out. There's been much hypothetical discussion about the potential limits of a script that hasn't been written yet. :)
This isn't going to be perfect. I'm afraid it can't really reach my initial goal as well as I'd hoped, in the early stages. But I think the "uphill" problem can be solved using the weighting approach, and things can be refined to some degree after that. I'm really still kind of surprised that it works at all. I just hope it can be made to work for at least some situations beyond those tested by JoePublic.
===========================sigline======================================================
Cage can be an opinionated jerk who posts without thinking. He apologizes for this. He's honestly not trying to be a turkeyhead.
Cage had some freebies, compatible with Poser 11 and below. His Python scripts were saved at archive.org, along with the rest of the Morphography site, where they were hosted.
Wouldn't it be possible to use patterns?
I think the problem is to try to swap morphs/shapes from one figure to another figure, BUT THE SCRIPT HAS NO CLUE HOW THE FIGURE IS EXACTLY SHAPED.
Of course one could manually input the exact position of the nose, mouth, ears, but as long as standard Poser meshes are used, why not use premade patterns?
If the script had a set of premade patterns it could look up, like:
This is V3, and the pattern says that her upper lip is THESE 300 vertices.
And V3's eyes are THESE 500 vertices. And her nose is ...
And this is V1, and her upper lip is THESE 300 vertices... and so on.
And then analyze the morph it wants to transfer based on these patterns.
And then only transfers the differences found in figure A over to figure B.
Isn't that what Wardrobe Wizard does?
Using premade patterns for each known Poser character so it only has to transfer the differences in shape?
Sorry, I'm no programmer, so just ignore this post if it doesn't make sense or if it has been already said before.
:unsure: :unsure: :unsure:
I'm not sure exactly what data WW uses, but yes, I'm sure that it's some form of pattern or hinting information.
Knowing which vertices make up some feature is part of the answer, but it still doesn't really tell you how to line them up if they are shaped differently. For example, just to start, suppose one nose is 2 inches lower than the other? So you also need to know some offset/center type information. But what if one nose has wide nostrils and the other has slim ones? Or one's nostrils go relatively more 'into' the head compared to the other ones that go relatively more 'up' into the nose? Or up-turned vs down-turned nose tip? Or is just 20% larger/smaller overall (messing up how things line up) ?
In general, noses are actually probably somewhat similar between meshes, but some other features will have more of these types of issues (ears in particular, but eyes and lips as well).
Cinema4D Plugins (Home of Riptide, Riptide Pro, Undertow, Morph Mill, KyamaSlide and I/Ogre plugins) Poser products Freelance Modelling, Poser Rigging, UV-mapping work for hire.
again, yes ww solved that. from furrette to the mill baby to the freak to apollo. my guess is that philc and kamilche figured out something fairly easy to do, or it would be a lot more than $5 a pop to add (some) new figures (some are free). and in the beginning, they added bunches of 'em for free. if you like, i'll have a go at converting the daz fantasy mask for v3, and post the results. but i know that it seems to matter how much detail a body part has. i think hands were omitted for detail reasons, and i'm not sure that facial features are part of the conversion process. that's one of the reasons i think talking to him might be beneficial. i think you guys might need to deal with things at a totally different level of detail and with a very different aim.
i know that as a user, the difficulty was in going from large to small. from things they said about adding apollo, i think the difficulty in scripting is a non-standard zero pose (like a-pose versus t-pose).
Quote - again, yes ww solved that. from furrette to the mill baby to the freak to apollo. my guess is that philc and kamilche figured out something fairly easy to do, or it would be a lot more than $5 a pop to add (some) new figures (some are free)...
I don't know if they are still doing this or what the current price is, but there's a fee to the developer to get your figure supported as well.
And you're right, I don't think facial details are included (not much reason to, relative to the task of that app).
Cinema4D Plugins (Home of Riptide, Riptide Pro, Undertow, Morph Mill, KyamaSlide and I/Ogre plugins) Poser products Freelance Modelling, Poser Rigging, UV-mapping work for hire.
true, but it's only $100 for the $5 addition by request. i think there are figures he just adds. that is, i'm not sure daz paid him to add v4 or the mill baby. unlike other figures, ww for v4 is more of a detriment to the market than a positive force to buy. in the support forums is a thread that's just a long poll of requested figures (some of which he's added). for instance, i find it dubious that the creator of eve4 paid anything for her addition. and i'm not even sure dacort has been around to even know about ww, let alone pay for natalia support.
look at the list of supported figures. granted, philc is a complete marvel at how much he does, but if it took a week to add each of those figures, i doubt it would have ever been released. and kamilche worked (and works) on making games at the same time.
let me put it another way: if you came up with an app that transferred morphs but supported 1/10 the number of figures ww does, people would probably still be interested.
Oh, I agree with everything you said there - I just wanted to point out that there were other sources of income related to that work (not in all cases). I didn't quote a price because as I mentioned, I wasn't sure it was the same in all cases, but it was higher than what you quoted when I asked (not long after it came out).
Cinema4D Plugins (Home of Riptide, Riptide Pro, Undertow, Morph Mill, KyamaSlide and I/Ogre plugins) Poser products Freelance Modelling, Poser Rigging, UV-mapping work for hire.
For defining features....
One might assume the tip of the nose is the point which lies furthest from the center on +z. One might look for a nostril material to locate the nose. One might have the user place a marker object. One would have to expand to define the nose, overall, beyond that....
For ears, look for clustered geometries on the side of a head which lie outside the basic shape defined by the scalp. More complicated than the tip of the nose, but it seems feasible....
For a mouth, look for lips, teeth, or tongue materials. Or look for the area on +z where the normals of neighbor vertices point toward one another on y. Or look for the place where the mesh turns inward upon itself.
Then if and when any of these are found, find min/max areas for them and calculate basic offsets to use when comparing the meshes. This could (if workable) remedy the worst concerns about incompatible shapes overall, but then there would still be variance within the defined features to be accommodated.
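As a toy example of the first heuristic above (purely illustrative; it assumes the head geometry is in its default orientation, facing +z):

def find_nose_tip(geom):
    # naive feature guess: the vertex that sticks out furthest on +z
    verts = geom.Vertices()
    tip = 0
    for i in range(1, geom.NumVertices()):
        if verts[i].Z() > verts[tip].Z():
            tip = i
    return tip   # a real version would then grow a 'nose' region from here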
Complicated. I'm oversimplifying and being too optimistic, I know. :) Right now I'm struggling to understand how to get a vertex delta weight as an inversely proportional expression of one vertex's distance from another vertex. This project needs a math whiz.
===========================sigline======================================================
Cage can be an opinionated jerk who posts without thinking. He apologizes for this. He's honestly not trying to be a turkeyhead.
Cage had some freebies, compatible with Poser 11 and below. His Python scripts were saved at archive.org, along with the rest of the Morphography site, where they were hosted.
Okay. Muddling through the allusion to inverse proportionality in svdl's most recent post....
distance = distance between meshA vertex and meshB vertex
threshold = a constant
weight = threshold/distance
I think threshold might be able to be calculated as an expression of relative vertex densities between two octree regions. This is where I become uncertain. Perhaps the area of a region's bounding box divided by the number of vertices in that region? But then the comparison between the densities per region for the two meshes needs to be factored in. And I'm not convinced that this idea is sound at all, in the first place.
One could assign threshold as a true constant at the outset and not consider the mesh densities, but that seems like it would be too broad and wouldn't always apply in any given case. I assume the "constant" needs to vary with the densities of the meshes being compared. So a relative constant, if that's not an oxymoron. Hmm.
I'm not sure this would help for downhill, and I'm puzzled about implementation for uphill.
Any thoughts?
===========================sigline======================================================
Cage can be an opinionated jerk who posts without thinking. He apologizes for this. He's honestly not trying to be a turkeyhead.
Cage had some freebies, compatible with Poser 11 and below. His Python scripts were saved at archive.org, along with the rest of the Morphography site, where they were hosted.
No thoughts off-hand, but only because I'm kinda swamped with some contract work right now - sorry.
Just a quick thought... you might need to get the inverse of that weight above ( weight = 1 - weight ), so that if threshold distance is (arbitrary number) 20 and distance is 20, you'd get 1.0, (full weight), but you want zero weight for that one (it's at the maximum distance) - however that works into the averaging of multiple vertices.
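In other words (my reading of the suggestion, not a formula from the thread), a clamped linear falloff:

def falloff_weight(distance, threshold):
    # 1.0 when the two verts coincide, 0.0 at or beyond the threshold
    if distance >= threshold:
        return 0.0
    return 1.0 - (distance / threshold)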
Cinema4D Plugins (Home of Riptide, Riptide Pro, Undertow, Morph Mill, KyamaSlide and I/Ogre plugins) Poser products Freelance Modelling, Poser Rigging, UV-mapping work for hire.
...oh, and I'm not sure you need to worry about densities per se. Distance is distance (for the threshold value)... I think I'd just use some fixed number. Grab the closest (regardless of distance) and then factor in any others that 'might' fall within the threshold distance.
Cinema4D Plugins (Home of Riptide, Riptide Pro, Undertow, Morph Mill, KyamaSlide and I/Ogre plugins) Poser products Freelance Modelling, Poser Rigging, UV-mapping work for hire.
...then again, my personal favorite is the ray/poly intersection code to determine weighting of those vertices, so I hadn't given this other idea a lot of thought yet :).
Cinema4D Plugins (Home of Riptide, Riptide Pro, Undertow, Morph Mill, KyamaSlide and I/Ogre plugins) Poser products Freelance Modelling, Poser Rigging, UV-mapping work for hire.
I like the ray-poly idea, too. But unless the memory leak can be plugged or avoided, I can only implement that by stepping outside of PoserPython or revising the whole concept to use the Set and WorldVertex approach. Both possibilities, but this idea should be tested first since it doesn't require revising major elements in the existing code.
I'm fairly certain what I mention above is terribly incorrect. I wish I were able to visualize things like this.
In thinking about the "pattern recognition" problem of identifying certain features, I end up thinking about ways of dealing with mazes in game programming. A path through a mesh would be a lot like a path through a 2D maze in a video game. Could the A* algorithm or some fractal-derived method help? No idea. The math, again, is over my head....
===========================sigline======================================================
Cage can be an opinionated jerk who posts without thinking. He apologizes for this. He's honestly not trying to be a turkeyhead.
Cage had some freebies, compatible with Poser 11 and below. His Python scripts were saved at archive.org, along with the rest of the Morphography site, where they were hosted.
Just curious, but why would you need to 'Set' any vertex positions? (I assume that's what you meant by that)
But, as an aside, my assumption is that you're probably going to need to end up using the WorldVertex positions anyway (unless you come up with some automated way of lining up the meshes - for eg. Stephy Petite vs V3 relative head-heights).
Cinema4D Plugins (Home of Riptide, Riptide Pro, Undertow, Morph Mill, KyamaSlide and I/Ogre plugins) Poser products Freelance Modelling, Poser Rigging, UV-mapping work for hire.
I spoke a bit too loosely in referring to the "Set" method. I was thinking of the SetX/Y/Z WorldVertex approach used in NoPoke and elsewhere. The whole quandary I was trying to explain yesterday. I can use the source geometry or the actor WorldVertices. I can use vertex comparisons or vertex-to-polygon comparisons. The vert-to-poly comparisons currently result in a memory leak, the cause of which is still uncertain. Using the NoPoke approach may remedy that. Maybe not.
I have some Big Questions right now which keep getting lost in discussions of theory concerning the details of comparing meshes with incompatible shapes. I'm not surprised my questions are the least interesting in the thread. :-P Unfortunately, they're also the questions most pertinent to the actual script, until and unless someone else takes over. :( If I can't resolve some of these questions, I can't move the project forward to the point where details of mesh comparison will even be an active consideration.
Right now the two biggest questions are: why does this RAM leak occur, and what can be done about it? And: is there a way to calculate vertex weights using vertex-to-vertex comparisons, so as to avoid the RAM leak which accompanies the vert-to-poly comparisons?
It turns out that Walk This Way was taken over by svdl and the version which encounters memory leaks was his, not Ockham's. So I'm trying to reach svdl to ask about the RAM leak issues. His seems to be the only script in Poserdom to have had this same basic problem, and I hope he may have some understanding of the leak situation. Ockham informed me that it's quite hard to make Python leak memory. Hmm.
I'm trying the Spanki weight-calculating method for triangles within the context of svdl's suggestion about weighting using a vertex approach. I've tested using a spherical range check to find MeshA verts which should be weighted for a given meshB vert. The radius of the sphere is substituted for the triangle edge in the Spanki code. So far, this works to give me weights, but it is obviously incorrect, returning no correlations in many cases and returning groups of correlations which have a total weight above or below one in many other cases. But something along these lines may go somewhere. Some of the testing results are a bit puzzling, so far. The same loop structure of each meshB vert checking all meshA verts takes much longer in some cases, depending on how the sphere radius/threshold is set.
===========================sigline======================================================
Cage can be an opinionated jerk who posts without thinking. He apologizes for this. He's honestly not trying to be a turkeyhead.
Cage had some freebies, compatible with Poser 11 and below. His Python scripts were saved at archive.org, along with the rest of the Morphography site, where they were hosted.
"For defining features....
One might assume the tip of the nose is the point which lies furthest from the center on +z. One might look for a nostril material to locate the nose. One might have the user place a marker object. One would have to expand to define the nose, overall, beyond that....
For ears, look for clustered geometries on the side of a head which lie outside the basic shape defined by the scalp. More complicated than the tip of the nose, but it seems feasible....
For a mouth, look for lips, teeth, or tongue materials. Or look for the area on +z where the normals of neighbor vertices point toward one another on y. Or look for the place where the mesh turns inward upon itself.
Then if and when any of these are found, find min/max areas for them and calculate basic offsets to use when comparing the meshes. This could (if workable) remedy the worst concerns about incompatible shapes overall, but then there would still be variance within the defined features to be accommodated."
Forgive me if it doesn't make sense, because IANAP (I Am Not A Programmer), but this sounds very complicated.
What I think is needed is a datafile for the face of each known Poser mesh that the script could look up, where every part of a face (ear, nose, eyebrow, chin, cheek, etc) is linked to a group of vertices.
So if you want to transfer a morph, the procedure would go like this:
Make datafiles of the most common Poser meshes that define the face parts: Where are they placed, what vertices do they consist of? (Could this be done by regrouping the head and splitting it up into a dozen or more sub-parts, one for each feature of the face?)
Then:
This would transfer the face parts in mesh B into the (relative to the rest of the face) same position as found in mesh A.
The faces might still look a bit different because other features not stored in the datafile (EXACT shape of cheeks, chin, browbone, noseridge etc) might still be different.
To improve it, you could move and resize the body of mesh B according to the datafile so that both heads are the same position and have the same size.
Then you can run your original script and try to finish the morph.
Again, if this doesn't make sense from a technical point of view, I apologize in advance!
I was musing about ways to possibly automate an analysis of the mesh to try to locate features instead of spelling them out for the script. The datafiles only contain correlated vertices. Unless the process were to be fundamentally changed, the identification of features would be part of the processing which would precede the creation of the datafiles.
That said, I don't necessarily think about any of this terribly effectively. I bumbled into this and I'm trying to make it work as well as I can. No one else seems to be trying his hand at any similar concept. My own programming skill is quite limited, sadly.
The approach you suggest is somewhat similar to that used by Wardrobe Wizard, is that correct? I'm not keeping all these sub-threads straight very well. :) Maybe PhilC has been reading all of this, and he'll think about trying to add such capabilities to WW....
===========================sigline======================================================
Cage can be an opinionated jerk who posts without thinking. He apologizes for this. He's honestly not trying to be a turkeyhead.
Cage had some freebies, compatible with Poser 11 and below. His Python scripts were saved at archive.org, along with the rest of the Morphography site, where they were hosted.
Quote - The vert-to-poly comparisons currently result in a memory leak, the cause of which is still uncertain.
Yeah, but isn't the memory leak due to trying to store lots of lists of data about the polygons/edges? It seems like you could just compute that info on the fly as you needed it.
Quote - I have some Big Questions right now which keep getting lost in discussions of theory concerning the details of comparing meshes with incompatible shapes. I'm not surprised my questions are the least interesting in the thread. :-P Unfortunately, they're also the questions most pertinent to the actual script, until and unless someone else takes over. :( If I can't resolve some of these questions, I can't move the project forward to the point where details of mesh comparison will even be an active consideration.
Right now the two biggest questions are: why does this RAM leak occur, and what can be done about it? And: is there a way to calculate vertex weights using vertex-to-vertex comparisons, so as to avoid the RAM leak which accompanies the vert-to-poly comparisons?
Yeah, sorry for all the side-tracking. But you seem to be trying to hang on to / salvage a method that's causing you troubles. I still need to get the time to study the existing scripts better to see where/why the storage is needed, but personally, I think I'd just try to compute the data as needed and not worry about storing it.
I understand that you'd like to preprocess the meshes and come up with a reusable vertex correlation using some automated process, but (from my perspective) I've already discounted that as a valid thing to do (given my math abilities and due to shape differences). Or at least... wait until you have some methodology for solving the differences issue - as mentioned earlier, none of the vertex-correlation methods being discussed really do anything to address the larger shape differences issue.
Quote - I'm trying the Spanki weight-calculating method for triangles within the context of svdl's suggestion about weighting using a vertex approach. I've tested using a spherical range check to find MeshA verts which should be weighted for a given meshB vert. The radius of the sphere is substituted for the triangle edge in the Spanki code. So far, this works to give me weights, but it is obviously incorrect, returning no correlations in many cases and returning groups of correlations which have a total weight above or below one in many other cases. But something along these lines may go somewhere.
I've thought about that a little bit, but don't have any answers yet. The method I proposed always gives you 1.0 for the sum of the weights for the 3 vertices of the triangle. But if you're just grabbing N close vertices (within some radius), you lose the context of the surface above the point being checked, so the math at least is different (I need to think about how you can do the averaging). As you noted, doing a simple average based on the number of vertices within range doesn't work.
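To illustrate why the triangle version is self-normalizing: barycentric weights are ratios of sub-triangle areas, so they always sum to 1.0 for a point on the triangle. This is a generic sketch of that idea (names assumed), not the code Spanki posted.

def tri_weights(p, a, b, c):
    # barycentric weights of point p against triangle vertices a, b, c
    def area(u, v, w):
        # half the magnitude of the cross product of (v-u) and (w-u)
        x1, y1, z1 = v[0]-u[0], v[1]-u[1], v[2]-u[2]
        x2, y2, z2 = w[0]-u[0], w[1]-u[1], w[2]-u[2]
        cx, cy, cz = y1*z2 - z1*y2, z1*x2 - x1*z2, x1*y2 - y1*x2
        return 0.5 * (cx*cx + cy*cy + cz*cz) ** 0.5
    total = area(a, b, c)
    wa = area(p, b, c) / total   # weight for vertex a
    wb = area(p, c, a) / total   # weight for vertex b
    wc = area(p, a, b) / total   # weight for vertex c
    return wa, wb, wc            # sums to 1.0 when p lies on the triangle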
Cinema4D Plugins (Home of Riptide, Riptide Pro, Undertow, Morph Mill, KyamaSlide and I/Ogre plugins) Poser products Freelance Modelling, Poser Rigging, UV-mapping work for hire.
*"But you seem to be trying to hang on to / salvage a method that's causing you troubles."
I tend to perseverate with regard to an idea once I've started with it. It's one of my more frustrating traits, I'm afraid. My apologies. You may be right. It looks increasingly like some of my premises need to change in order to pursue this. But I haven't been able to fully test a few ideas which seem to me like they deserve testing before I completely change tracks.
*"Yeah, but isn't the memory leak due to trying to store lots of lists of data about the polygons/edges? It seems like you could just compute that info on the fly as you needed it."
It doesn't look like that's the problem, though it may contribute to it. I tested computing the polygon data on the fly, and ended up with both a massive slowdown and a memory leak. There are a couple of lists which can't be calculated on the fly; unless they're the cause of the remaining leak, the lists aren't really the root cause. Looking at svdl's Walk This Way, it looks like he encountered the same basic memory issues without using anything like my lists, and he seems to have concluded that the problem is actually Poser. So it looks like the polygon method is beyond my skills, unless someone who understands the memory problem can explain it to me (svdl is the only one I know to have encountered it, but he hasn't responded yet to my IM), or unless I move away from PoserPython and its apparent limitations. That's a big change, and I'm not prepared to make it until I have a better grasp of what's happening. Hence my frustration that the focus on the memory problems keeps being lost.
So the polygon issue is stalled due to memory issues. The vertex approach is probably inadequate, but I need to be sure of that by figuring out whether svdl's suggestion in his latest post (apparent suggestion; it's rather ambiguous, really) can prove useful for producing weights using vertex-to-vertex comparisons.
It's uncertain whether shifting from the source geometry approach to a world vertex approach would rescue the polygon method from the memory issues. Walk This Way seems to use the world vertex methods, but it still leaks. Of course, it's also a very different script. The change in procedure would introduce a host of new complications and could remove any hope of avoiding the need to do a full geometry comparison for every morph target. I'm saving this change as a last resort, just short of leaving PoserPython. (Actually, leaving PoserPython is a very appealing idea. Poser isn't terribly friendly and its version of Python lacks many of the options of later Python versions and has more bugs....)
I really do appreciate everyone's help. I understand that forum threads tend to drift. But I'm trying to be focused and methodical. It's the way I have to work in order to compensate for my own limitations, which are legion. :-P Once again, this basic concept seems like it deserves a more qualified programmer.
===========================sigline======================================================
Cage can be an opinionated jerk who posts without thinking. He apologizes for this. He's honestly not trying to be a turkeyhead.
Cage had some freebies, compatible with Poser 11 and below. His Python scripts were saved at archive.org, along with the rest of the Morphography site, where they were hosted.
Understood :). BTW, I've been playing with the latest script you posted and have at least one comment.... if the printout for the regions of missed matches is correct, then your find_regions() routine appears to be broken.
I'm still trying to track it down, but it's reporting vertices that live in the back/top/left trying to match with regions in the front/right/bottom (it always seems to be polar opposites).
Cinema4D Plugins (Home of Riptide, Riptide Pro, Undertow, Morph Mill, KyamaSlide and I/Ogre plugins) Poser products Freelance Modelling, Poser Rigging, UV-mapping work for hire.
In particular, this code here looks suspect:
if vx < limits[0]: row = r[1][0]
elif vx >= limits[1]: row = r[1][0]    # row is inclusive to right
if vy <= limits[2]: col = r[1][1]      # col is inclusive toward bottom
elif vy > limits[3]: col = r[1][1]
if vz < limits[4]: depth = r[1][2]
elif vz >= limits[5]: depth = r[1][2]  # depth is inclusive toward front
...just taking the X test case, it looks like if x < this_region_minX, then the row is set to this one, otherwise if x >= this_region_maxX, then the row is also set to this one. The problem is, one or the other of those cases will always be true - ne? Or am I missing some subtlety of what you're doing there?
Cinema4D Plugins (Home of Riptide, Riptide Pro, Undertow, Morph Mill, KyamaSlide and I/Ogre plugins) Poser products Freelance Modelling, Poser Rigging, UV-mapping work for hire.
...and actually, I must be missing something, because the above interpretation couldn't be right. We need to check if x < maxX and >= minX for a hit, instead of the other way around.
Cinema4D Plugins (Home of Riptide, Riptide Pro, Undertow, Morph Mill, KyamaSlide and I/Ogre plugins) Poser products Freelance Modelling, Poser Rigging, UV-mapping work for hire.
...or maybe I'm not seeing things . Try this code in there instead:
if vx > limits[0] and vx <= limits[1]: row = r[1][0]     # row is inclusive to right
if vy >= limits[2] and vy < limits[3]: col = r[1][1]     # col is inclusive toward bottom
if vz > limits[4] and vz <= limits[5]: depth = r[1][2]   # depth is inclusive toward front
...it's possible that the comments are no longer true (I didn't double check that), but this seems to at least get the missed matches in the right sectors.
Cinema4D Plugins (Home of Riptide, Riptide Pro, Undertow, Morph Mill, KyamaSlide and I/Ogre plugins) Poser products Freelance Modelling, Poser Rigging, UV-mapping work for hire.
...also, the reason it's missing some matches is that the polygons it needs to match against don't live in the regions in question... there's a case where all the vertices of a polygon live in other regions while the surface still spans across it, which keeps the polygon from being listed in all the regions it occupies. I haven't thought up the fix yet, though :).
There's another case where some regions don't have any polygons assigned to them... possibly for similar reasons.
Cinema4D Plugins (Home of Riptide, Riptide Pro, Undertow, Morph Mill, KyamaSlide and I/Ogre plugins) Poser products Freelance Modelling, Poser Rigging, UV-mapping work for hire.
Which script are you examining? The one I posted most recently was the WIP test for polygon intersection, with the memory leak. That one?
The region code would have been the same in either case.
Edit: Okay... if you're getting the printout at the end, it's the WIP poly script you're testing. If you change the argument for the variable "testing" to 1 when you run the main function, at the bottom of the script, it will create a box at every point of successful intersection. Useful for visualizing when you're a bit slow, like me.... :)
For the polygon regions, I've tried to implement svdl's suggestion that vertices be used as the point of reference for placing polys in a region. The result, if I recall correctly, is that polygons can end up listed in multiple regions along the borders. The polygon region list is built from the vertex region dict. (The dict was to become a list, but the memory leak problem broke out before I got to that. I tested a list instead of a dict in one version, but that version was a mess, with various tests for the leak built in, and the dict ended up in the posted code.)
I'm glad you've caught this, with the regions. There were no errors in the vertex comparison script as a result of what you report, so I never even checked for such a thing....
I don't think vertices alone will work to create weights. If it can work, I have no idea how. There doesn't seem to be enough information available without using the polygons as a point of reference. Someone please - prove me wrong. :)
Thanks for checking this, Spanki....
===========================sigline======================================================
Cage can be an opinionated jerk who posts without thinking. He apologizes for this. He's honestly not trying to be a turkeyhead.
Cage had some freebies, compatible with Poser 11 and below. His Python scripts were saved at archive.org, along with the rest of the Morphography site, where they were hosted.
Don't know exactly, but perhaps the following may be a hint regarding your memory problem:
*Warning: Keeping references to frame objects, as found in the first element of the frame records these functions return, can cause your program to create reference cycles. Once a reference cycle has been created, the lifespan of all objects which can be accessed from the objects which form the cycle can become much longer even if Python's optional cycle detector is enabled. If such cycles must be created, it is important to ensure they are explicitly broken to avoid the delayed destruction of objects and increased memory consumption which occurs. Though the cycle detector will catch these, destruction of the frames (and local variables) can be made deterministic by removing the cycle in a finally clause. This is also important if the cycle detector was disabled when Python was compiled or when using gc.disable(). For example:*
def handle_stackframe_without_leak():
    frame = inspect.currentframe()
    try:
        # do something with the frame
        pass
    finally:
        del frame
Every time a function is called, a "frame" is built for tracebacks and such.
It is possible that CP's programmers changed how garbage collection works, because memory leaks are not a common problem with newer Python versions.
You can find the original documentation here (Python 2.4).
Documentation for 2.2 does not contain this information (just change the version number in the URL to 2.2, 2.3).
Cage, yes - correct on all counts. The code is building lists of polygons that live in some particular region based on whether any vertices of a polygon reside there. This does allow the polygons to be listed in more than one region. The problem is that a polygon's (say a triangle's) vertices might live in regions 0, 1 and 4 (assuming a division of 2, with regions numbered 0-7). That polygon might well span across the corner of region 5, but it won't end up in region 5's polygon list, since none of its vertices live there.
The only way I see to fix this off-hand is to test the edges of the polygons, instead of the vertices.
Cinema4D Plugins (Home of Riptide, Riptide Pro, Undertow, Morph Mill, KyamaSlide and I/Ogre plugins) Poser products Freelance Modelling, Poser Rigging, UV-mapping work for hire.
Spanki, I plugged in your code and immediately got errors due to ungrouped vertices. :-P That may reflect differences in the testing geometries we're using. I've added features to help me track what's happening with the grouping. All of the vertices are being grouped and none are being multi-grouped. All of the polygons are being grouped and those along the octree boundaries are being multi-grouped, except in certain cases. The attached script will color the target geometry's polygons by region assignment. Multi-grouped polys are given a separate color. I've been testing with a Poser ball prop as meshA and a Poser ball prop which has been smoothed once in Wings 3D as meshB. I've also plugged in the code you advised, but it's commented out right now while I try to figure out why it errors.... And, ah, there's a lot of junk in there, mostly commented out, which is left over from fighting the memory leak. Sorry about the mess....
So perhaps if I add the polygon edge list into the sort when filling the polygon region list, we'll get better results. I'll look at that. Thank you.
===========================sigline======================================================
Cage can be an opinionated jerk who posts without thinking. He apologizes for this. He's honestly not trying to be a turkeyhead.
Cage had some freebies, compatible with Poser 11 and below. His Python scripts were saved at archive.org, along with the rest of the Morphography site, where they were hosted.
This is good. Thank you.
I have a couple of questions.
"the intersection/bounds checking code doesn't really distinguish the direction of the ray being cast" - Are you suggesting that it needs to? Or is this a moot point once the region code is repaired?
"if you're not using the center version, then the direction vector ('ang') needed is just the vertex normal" - So the normal vector itself serves in place of the line? Nothing needs to replace the removed calculations? (I've never been able to puzzle out how a vector can be equivalent to a point in some regards, yet possess direction or magnitude.... Cage is a bit slow.)
You did quite a bit. Golly. Thank you. I'll try to measure up to your additions. (Try to, yes....) :)
===========================sigline======================================================
Cage can be an opinionated jerk who posts without thinking. He apologizes for this. He's honestly not trying to be a turkeyhead.
Cage had some freebies, compatible with Poser 11 and below. His Python scripts were saved at archive.org, along with the rest of the Morphography site, where they were hosted.
Quote - "the intersection/bounds checking code doesn't really distinguish the direction of the ray being cast" - Are you suggesting that it needs to? Or is this a moot point once the region code is repaired?
Possibly moot, once repaired, but I need to think about that more.
Quote - "if you're not using the center version, then the direction vector ('ang') needed is just the vertex normal" - So the normal vector itself serves in place of the line? Nothing needs to replace the removed calculations? (I've never been able to puzzle out how a vector can be equivalent to a point in some regards, yet possess direction or magnitude.... Cage is a bit slow.)
Yeah, it's not always clear whether a 'vector' is actually being used as a vertex/point or as a direction/normal vector...
To cast a ray, you need 2 things - a starting point and a direction vector. To get the direction vector (for a line), you subtract the starting point from the ending point and then 'normalize' the result - divide each axis by the magnitude (distance/length) of that line - this produces what's commonly known as a 'unit vector' (Normals are unit vectors).
Unit vectors have the defining characteristic that their 'length' is 1.0, so they describe a direction only. If you want a line starting from vertex startV, with a length/distance/magnitude of length along the direction/unit vector angle, you can get the end point of the line with:
endV = startV + (length * angle)
So, if we look back at that code, you can see what's ultimately happening with 'ang' down below there a bit...
point = [(ang[0]*intersect)+v[0],(ang[1]*intersect)+v[1],(ang[2]*intersect)+v[2]]
...v is the startV point of the line, intersect is actually the length/distance of the line and ang is the unit/direction vector being used to come up with point, which is the endV point of the line.
In the case of casting a ray out from the vertex, the code is doing some redundant stuff to produce what is essentially the normal for that vertex...
v = coord_list(geom2,vi) # get vertex
vn = normals_list(geom2,vi) # get normal
vnv = vecadd(v,vn) # add normal to vertex (effectively the end point of a line from v to vnv along its normal, with a length of 1.0)
dist = point_distance(v,vnv) # now compute distance of that line (it'll be 1.0)
ang = [(v[0]-vnv[0])/dist,(v[1]-vnv[1])/dist,(v[2]-vnv[2])/dist] # and 'normalize' the vector by dividing each axis by the distance
...and voila! - you're back where you started from - with the normal for that vertex :).
If you are casting the ray out from the center of the mesh, then you're not actually using the vertex's normal, so you create a new angle/vector starting at the center and just passing through the vertex. You can see similar code being used to determine that angle.
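A minimal sketch of that 'from the center' case, under the assumption that the mesh center is already known (the function name is mine, not the script's):

def direction_from_center(center, v):
    # unit vector pointing from the mesh center out through vertex v
    dx, dy, dz = v[0]-center[0], v[1]-center[1], v[2]-center[2]
    dist = (dx*dx + dy*dy + dz*dz) ** 0.5
    return [dx/dist, dy/dist, dz/dist]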
Cinema4D Plugins (Home of Riptide, Riptide Pro, Undertow, Morph Mill, KyamaSlide and I/Ogre plugins) Poser products Freelance Modelling, Poser Rigging, UV-mapping work for hire.
Quote - Spanki, I plugged in your code and immediately got errors due to ungrouped vertices. :-P That may reflect differences in the testing geometries we're using.
I forgot to address this... you need to 'mixboxes' so all of the vertices can get grouped. Because of the way the old code worked (it grouped vertices when they DIDN'T match the conditions), they pretty much always ended up in 'some' region (the wrong one :)).
Cinema4D Plugins (Home of Riptide, Riptide Pro, Undertow, Morph Mill, KyamaSlide and I/Ogre plugins) Poser products Freelance Modelling, Poser Rigging, UV-mapping work for hire.
Not too bad (I used two identical meshes) - it found roughly 900 out of 1200 matches, but some of them were false matches (this is with the vertex/edge collision code in check_bounds() commented out, btw). Also of note here is the multi-grouped polygon display (hot pink)... note that the strip of polygons to the right of center ended up in more than one group, but the ones to the left of center did not (since the center vertices don't currently belong to more than one region).
You 'could' fix it by allowing the vertices to live in multiple regions, but that doesn't account for the region-spanning polygons I mentioned earlier. The better fix is still going to be using the edges in the region test, instead of the vertices.
Back to the earlier question of whether it matters if the intersection/bounds_check code worries about the direction of the rays being cast... having the region-culling in place takes care of most of that problem, but there can be some 'local' (to a region) problems as well. I think you need to let it search both directions along the ray (in case one mesh is bigger or smaller than the other), but some additional tests will need to be included to eliminate some of the false hits taking place.
normal direction test
I'd check to make sure that the polygon in question is facing in the same general direction as the vertex normal. You can do that by checking the Z component of the polygon normal against the Z component of the vertex normal - they should either both be positive or both be negative.
multiple hits
Currently, the first 'hit' breaks out of the loop. What I'd do is just note the distance of that hit and keep looking. Once you have all the hit polys and their distances, use the closest one (that also passes the above direction test).
...it's possible that the first test would eliminate the need for the second test.
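Here's a rough sketch of both tests together, using a full dot product in place of the single-axis Z-sign check described above (the function name and the layout of 'hits' are assumptions):

def pick_best_hit(vert_normal, hits):
    # hits: list of (poly_normal, distance) pairs gathered while looping
    best = None
    for n, dist in hits:
        # normal direction test: reject polys facing away from the vertex
        dot = n[0]*vert_normal[0] + n[1]*vert_normal[1] + n[2]*vert_normal[2]
        if dot <= 0.0:
            continue
        # multiple hits: remember the nearest surviving polygon
        if best is None or abs(dist) < abs(best[1]):
            best = (n, dist)
    return best  # None if every hit failed the direction test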
Cinema4D Plugins (Home of Riptide, Riptide Pro, Undertow, Morph Mill, KyamaSlide and I/Ogre plugins) Poser products Freelance Modelling, Poser Rigging, UV-mapping work for hire.
Svdl's suggestion to compensate for polygons at region edges (back on page 1) was to extend the range of the regions to include those polygons. I'm trying to do this by calculating the polygon regions from a vertex regions dict derived from octree regions which have been extended. Unfortunately, I'm getting funny results with my box enlargement code. What I had was wrong whenever the box isn't centered at (0,0,0). Proper scaling apparently involves multiplying the vectors by a scalar value, but when I do that I end up with strange offsets in the resulting scaled regions.
That is, instead of each region being scaled up until the edges overlap, the entire box is scaled up and positioned on y with the min at y = 0. I get too easily confused when trying to visualize math....
I'm going to do some searching online to find scaling math. This is pitiful. :-P
Did you note any RAM leaks when running the above test? If not, what version of Poser are you using? If so, how dramatic was the leak?
===========================sigline======================================================
Cage can be an opinionated jerk who posts without thinking. He apologizes for this. He's honestly not trying to be a turkeyhead.
Cage had some freebies, compatible with Poser 11 and below. His Python scripts were saved at archive.org, along with the rest of the Morphography site, where they were hosted.
Take a look at this image. It's meant to depict the top view of just one polygon defined by the vertices A, B, C and D. The squares outside of that numbered 0, 1, 4 and 5 represent the 8 regions (again, viewed from the top, so regions 2, 3, 6 and 7 are not shown).
With the current code, A would be in region 0, B would be in region 1, and both C and D would be in region 4. So this polygon would be listed in each of those regions as well - 0, 1 and 4. But as you can see, the polygon spans across region 5; since it's not included in region 5's polygon list, no vertices from the other mesh in region 5 will bother testing it == missed match.
If you 'simply' expand the regions, you'd only be guessing how much you need to expand them in order to catch this situation. The best way to account for it is to test the edges of the polygon (the edge from D->B would get a hit for region 5).
So you might want to figure out your scaling problem for academic reasons, but it won't be a good solution to the problem :).
Cinema4D Plugins (Home of Riptide, Riptide Pro, Undertow, Morph Mill, KyamaSlide and I/Ogre plugins) Poser products Freelance Modelling, Poser Rigging, UV-mapping work for hire.
This is what I've found for scaling. Apparently nothing is as easy as one would assume. Hmm.
"To generate complicated 3D scene animations multiple transformations may be required. For example, scaling of an object not centered at the origin (0,0,0) also results in translation of the object. So, to rotate the object around a fixed point requires three separate transformations: 1)translate that point to the origin, 2) perform the rotation, and 3) translate the point back to the original position."
I got your adjusted code to find all the vertices by using both >= and <=. This seems to work, so far.
I'm not sure how one would go about placing polygons in a region based on edges. I can find the edges, but PoserPython, unlike Blender, doesn't recognize that the edges exist. So I can get the start and end points, but if neither of these vertices lies within the region, presumably some kind of special handling for the line of the edge intersecting the box would be needed. Complicated. Svdl's approach would rely on making each polygon region larger than the default octree zones, which means looping through more polygons for each zone. So it looks like there is a potential slowdown either way....
Could a normals direction comparison be used to screen out edge bound checks before we bother running them? Or would that potentially miss too many cases?
Ah, I'm afraid I don't follow you, regarding the mixboxes.
===========================sigline======================================================
Cage can be an opinionated jerk who posts without thinking. He apologizes for this. He's honestly not trying to be a turkeyhead.
Cage had some freebies, compatible with Poser 11 and below. His Python scripts were saved at archive.org, along with the rest of the Morphography site, where they were hosted.
Cross-posted.
*"So you might want to figure out your scaling problem for acedemic reasons, but it won't be a good solution to the problem :)."
How can the polys be placed by edge, then? It seems like a whole series of line-plane intersection or line-box intersection checks would be needed....
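One standard line-box check that could serve here is the 'slab' test for a segment against an axis-aligned box. A hedged sketch (names are mine): an edge belongs in a region if any piece of the segment between its endpoints lies inside the region's box.

def segment_hits_box(p0, p1, box_min, box_max):
    # clip the segment p0->p1 (parametrized 0..1) against each axis slab
    tmin, tmax = 0.0, 1.0
    for axis in range(3):
        d = p1[axis] - p0[axis]
        if abs(d) < 1e-12:
            # segment parallel to this slab: reject if outside it
            if p0[axis] < box_min[axis] or p0[axis] > box_max[axis]:
                return 0
        else:
            t1 = (box_min[axis] - p0[axis]) / d
            t2 = (box_max[axis] - p0[axis]) / d
            if t1 > t2:
                t1, t2 = t2, t1
            tmin = max(tmin, t1)
            tmax = min(tmax, t2)
            if tmin > tmax:
                return 0  # the slab intervals don't overlap - no hit
    return 1  # some piece of the edge lies inside the box

An edge test like this would catch the D->B case Spanki described; the one case it can still miss is a polygon so large that its interior spans a region none of its edges touch.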
===========================sigline======================================================
Cage can be an opinionated jerk who posts without thinking. He apologizes for this. He's honestly not trying to be a turkeyhead.
Cage had some freebies, compatible with Poser 11 and below. His Python scripts were saved at archive.org, along with the rest of the Morphography site, where they were hosted.
*"you mentioned earlier that you were just applying deltas, but it looks like you still adjust 'all' verts and not just the ones involved in some morph."
*Every vertex ends up with a new delta assigned to it. This is the problem with going from the lower resolution to the higher, which leads to the "piggybacking". Every vertex is finding a matched vertex and adapting the deltas of that match for the new mesh. So we are transferring deltas, not looking at actual "shapes" (as in NoPoke, say, which uses the world vertex positions; an approach which may explain why Ockham's scripts don't fall prey to the RAM leaks, and which may, therefore, clarify some serious limits in PoserPython's potential).
I haven't run across a situation yet where a matched vertex fails to find a delta listing to convert. I hadn't even looked at it that way; I've assumed we want to move all of them. Could that be the secret of the current successes? I don't know. I bumble through a lot of happy accidents....
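As a toy sketch of the distinction being discussed (all names and layouts are assumptions): 'morph-matching' copies deltas only for target verts whose matched source vert actually moves, while the current behavior gives every target vert a delta.

def transfer_deltas(match, src_deltas, morphed_only):
    # match: target vert index -> matched source vert index
    # src_deltas: source vert index -> (dx, dy, dz), morphed verts only
    out = {}
    for tv in match.keys():
        sv = match[tv]
        if src_deltas.has_key(sv):
            out[tv] = src_deltas[sv]   # adapt the matched vert's delta
        elif not morphed_only:
            out[tv] = (0.0, 0.0, 0.0)  # 'every vertex gets a delta' style
    return out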
More later, after I've read in more detail offline.... AOL likes to boot me off during the day.
===========================sigline======================================================
Cage can be an opinionated jerk who posts without thinking. He apologizes for this. He's honestly not trying to be a turkeyhead.
Cage had some freebies, compatible with Poser 11 and below. His Python scripts were saved at archive.org, along with the rest of the Morphography site, where they were hosted.