Cage opened this issue on Dec 20, 2006 · 1232 posts
Cage posted Mon, 08 January 2007 at 5:10 PM
I'm a bit worried that I may have misunderstood again, but it also seems evident that I omitted a few ideas from my outline.
First, I see a definite benefit to screening out zero deltas. This will result in slimmer morph targets. I'm embarrassed that I didn't think of that myself. But I don't see where we'd get any further benefit from such screening in terms of simplifying or speeding up the process. The current working vertex-to-vertex script simply writes zero deltas for meshB wherever it receives zero deltas from meshA. So when a nose morph is processed, we end up with a delta for every vertex (behavior we should change), but most of those deltas don't affect the outcome beyond inflating the size of the targetGeom reference in the .cr2.
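Just so I'm sure we're picturing the same thing, here's a minimal sketch of the screening step. The delta-list format and the epsilon threshold are my own assumptions, not anything taken from the working script:

EPSILON = 1e-6   # my guess at a sensible "this is really zero" threshold

def screen_deltas(deltas, epsilon=EPSILON):
    # Keep only the (vertex index, delta) pairs that actually move a vertex.
    # 'deltas' is assumed to be a list of (dx, dy, dz) tuples, one per vertex.
    kept = []
    for i in range(len(deltas)):
        dx, dy, dz = deltas[i]
        if abs(dx) > epsilon or abs(dy) > epsilon or abs(dz) > epsilon:
            kept.append((i, (dx, dy, dz)))
    return kept

A nose morph would then write a few hundred indexed deltas instead of one per head vertex, which is where the slimmer targetGeom block comes from.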
The morph I've been testing with during development is a full-head character morph which moves 90% or more of the vertices in the Vicky 1 head. For such a morph, screening out the zero deltas up front doesn't offer as great a benefit as it would for a mere nose morph. We still have to create deltas for 90%+ of the vertices in Vicky 3/meshB (assuming the areas of relative density in both meshes are similar; a disproportionately dense back of the head in Vicky 3 would reduce the percentage, presumably).
This test case led me to conclude that one of the fundamental ideas of the script needed to be the use of data files to store correlated vertices. Since we don't know in advance which correlations we'll need, and extreme cases (like my character morph) may require a full meshA-to-meshB comparison, the overall process makes more sense to me if we split it up. So I developed a method which allows up-front comparison of the elements that will remain constant between the two actors.
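For what it's worth, here's roughly what I have in mind for that data-file step: correlate each vertex of meshB's source geometry to its nearest vertex in meshA's, once, then save the mapping for every later morph to reuse. The actor names, the file name, and the use of cPickle are placeholders of mine; the PoserPython calls (Scene, Actor, Geometry, Vertices) are as I understand the manual, so check your version's docs:

import poser
import cPickle

scene = poser.Scene()
geomA = scene.Actor("Head").Geometry()     # meshA, e.g. the Vicky 1 head
geomB = scene.Actor("Head 1").Geometry()   # meshB, e.g. the Vicky 3 head

vertsA = [(v.X(), v.Y(), v.Z()) for v in geomA.Vertices()]
vertsB = [(v.X(), v.Y(), v.Z()) for v in geomB.Vertices()]

def nearest_index(point, points):
    # Brute-force nearest neighbor by squared distance; slow but simple.
    px, py, pz = point
    best = 0
    best_d = None
    for i in range(len(points)):
        x, y, z = points[i]
        d = (x - px) ** 2 + (y - py) ** 2 + (z - pz) ** 2
        if best_d is None or d < best_d:
            best = i
            best_d = d
    return best

# correlation[i] = index of the meshA vertex closest to meshB vertex i
correlation = [nearest_index(p, vertsA) for p in vertsB]

f = open("v1_to_v3_head.cor", "wb")
cPickle.dump(correlation, f, 1)   # binary pickle, reusable by any later morph
f.close()

Because the source geometries never change, that file only ever has to be built once per actor pair, no matter how many morphs we process afterward.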
To do this, I deviate from all of the examples I've seen by Ockham and others. Whereas those examples always seem to work with the world vertex positions of the actual actors in Poser, I decided to work with the source geometries for those actors. (I failed to clarify that in my notes.) The source geometries will remain constant. That's the fundamental difference in approach between the procedure I've tested and that used in NoPoke and elsewhere. NoPoke uses Set to change the actor geometry, albeit temporarily. Then it works with the world vertex "shape" of that geometry when it makes comparisons to develop the final morph. In that situation, it seems to me that each morph will have to run a full comparison check independently, because the surfaces - the "shapes" - will differ in each case. There is no constant that can be used to speed up or ultimately simplify the process. That's fine when you're just looking at an isolated area morph, like a nose, but not so helpful (as I understand it) when the entire head geometry needs to be considered. So I rejected this approach at the outset.
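To spell out the distinction in PoserPython terms, assuming Vertices() and WorldVertices() behave the way I read the docs:

import poser

geom = poser.Scene().Actor("Head").Geometry()

local = geom.Vertices()        # source-geometry positions: constant,
                               # so a correlation file stays valid
world = geom.WorldVertices()   # the posed/morphed "shape": changes whenever
                               # a dial or a Set call deforms the actor

v = local[0]
w = world[0]
print v.X(), v.Y(), v.Z()      # always the same -> comparisons can be cached
print w.X(), w.Y(), w.Z()      # differs per morph -> forces a fresh comparison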
Unfortunately, this rejected approach may be necessary if the script is to be workable within the functional limits of PoserPython. My method of comparing the base geometries requires that I store a lot of information within Python for processing, and this is apparently the reason for the RAM leakage. The world vertex approach (of NoPoke) allows certain data to be stored using Poser's internal methods, by using Set to change the geometry. Python can then ignore that information until we need it, and there's no (or at least less) leakage, as far as I can tell. But to get this benefit, I have to accept running a full mesh comparison for every morph, which doesn't seem feasible in all cases.
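If the leakage really does come from holding big correlation tables in Python lists (and that's only my guess), one thing I've wondered about is streaming results to disk as they're computed instead of accumulating them. A sketch, reusing vertsA, vertsB, and nearest_index() from the earlier example; the file name is a placeholder:

f = open("v1_to_v3_head.cor.txt", "w")
for i in range(len(vertsB)):
    j = nearest_index(vertsB[i], vertsA)   # from the sketch above
    f.write("%d %d\n" % (i, j))            # meshB vertex index -> meshA vertex index
f.close()

This still holds the two vertex lists in memory, but at least the result table never has to live in Python all at once.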
So my current problem is apparently one of accepting RAM leaks, accepting the need for full mesh comparisons in each case, or stepping beyond PoserPython's limitations. If the RAM leaks are due to some other cause, this problem can be resolved (though I lack the programming acumen to determine that). If there's a way to find some constant between the meshes when using the world vertex approach, this problem can go away. If anyone sees a way out of my dilemma, I'll be happy. :) I don't see one, however, that allows me to improve things while staying within PoserPython's limits. So I've been considering stepping outside of PoserPython. Is there anything else that could change which I'm not considering? If you see anything, please tell me! :)
Here's the thread link for Ockham's Rosie and Paris example.
http://www.renderosity.com/mod/forumpro/showthread.php?message_id=2863715&ebot_calc_page#message_2863715
===========================sigline======================================================
Cage can be an opinionated jerk who posts without thinking. He apologizes for this. He's honestly not trying to be a turkeyhead.
Cage had some freebies, compatible with Poser 11 and below. His Python scripts were saved at archive.org, along with the rest of the Morphography site, where they were hosted.