Cage opened this issue on Dec 20, 2006 · 1232 posts
Spanki posted Mon, 08 January 2007 at 10:52 PM
Quote - I'm a bit worried that I may have misunderstood again, but it also seems evident that I omitted a few ideas from my outline.
It's quite possible that I have misunderstood. I really should sit down and study your previous scripts - I just haven't had the time to do so yet.

Quote - First, I see a definite benefit to screening out zero deltas. This will result in slimmer morph targets. I'm embarrassed that I didn't think of that myself. But I don't understand where we'd get any greater benefit from such screening in terms of simplifying or speeding up the process. The current working vertex-to-vertex script will simply write zero deltas for meshB where it receives zero deltas from meshA. So when a nose morph is processed, we end up with a delta for every vertex (which should change), but most of those deltas don't affect the outcome beyond inflating the size of the targetGeom reference in the .cr2.
...if they are in fact zero, then I have misunderstood what your process was doing - sorry about that. The speed-up I refer to would come from not (pre)processing the entire actor/mesh, but only the parts of it involved in the morph - more on this below...
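To make the zero-delta screening concrete, here's a minimal Python sketch. All names (`screen_deltas`, `EPSILON`, the dict layout) are illustrative, not taken from either of our scripts:

```python
# Hypothetical sketch: drop near-zero deltas before writing a morph
# target, so they don't inflate the targetGeom block in the .cr2.
EPSILON = 1e-6  # treat displacements smaller than this as "no movement"

def screen_deltas(deltas):
    """deltas maps vertex index -> (dx, dy, dz).
    Returns only the entries with a meaningful displacement."""
    kept = {}
    for idx, (dx, dy, dz) in deltas.items():
        if abs(dx) > EPSILON or abs(dy) > EPSILON or abs(dz) > EPSILON:
            kept[idx] = (dx, dy, dz)
    return kept

deltas = {0: (0.0, 0.0, 0.0), 1: (0.01, 0.0, -0.002), 2: (0.0, 0.0, 0.0)}
print(screen_deltas(deltas))  # only vertex 1 survives
```

For a nose morph on a full head, a filter like this is also what lets the rest of the pipeline skip the untouched 90% of the mesh entirely.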
Quote - The morph with which I've been testing during development is a full-head character morph which moves 90% or more of the vertices in the Vicky 1 head. In the case of such a morph, screening out the zero deltas up front doesn't present as great a benefit as it would in the example of a mere nose morph. We still have to create deltas for 90%+ of the vertices in Vicky3/meshB (assuming the areas of relative density in both meshes are similar; a disproportionately dense back of the head in Vicky 3 would reduce the percentage, presumably).
I agree with all of the above... with the added comment that I don't personally hold out much hope for doing such broadly scoped morphs in the first place - due to the differences in shapes as discussed earlier. If the morphs are generic enough in nature (make face taller, wider, etc) then they'd probably work fine, but if it does several more specific things (make nose longer, make ears pointy, push lips out, wrinkle forehead), then you've still got the dis-similar shape issue.
Quote - This test case led me to conclude that one of the fundamental ideas of the script needed to be the use of data files to store correlated vertices. Since we don't know which correlations we'll need in any given case, and extreme cases (like my character morph) may require up to a full meshA-to-meshB comparison, the overall process makes more sense to me if we split it up. So I developed a method which allows up-front comparison of the elements that will remain constant between the two actors.
Again, I agree with that logic, except that it relies on the assumption that general and re-usable vertex correlations can be made programmatically for future use.
Anyway, I think we're mostly communicating now. I was making some assumptions about what your script did, based on some of the images posted, but I'll take your descriptions above as gospel :).
So, let's get back to the basics... there are two primary issues that need to be resolved to transfer morphs between figures...
1. Mesh Topology Differences
This is the problem of determining how the vertices of one mesh correlate to the vertices of the other (with a different mesh topology). Assuming the meshes are nearly identical in shape but have different topologies, this would be a case like hi-res V3 vs. lo-res V3, or the hi/lo Mikis that Joe is working on.
The "closest vertex" approach to solving this has piggy-back issues. The "vertex projection to plane resulting in weighted list of vertices" approach that I described back on page 2 should produce better results. And svdl's "weighted average" method wasn't spelled out much, but sounds like it would be better as well.
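To contrast the two ideas in code: below is a naive closest-vertex lookup next to a weighted-neighbors variant. The weighted version uses inverse-distance weights over the k nearest vertices as a simple stand-in for the projection-to-plane method - not the actual algorithm from page 2, just an illustration of why a weighted list behaves better than a single snap. All names are made up for the sketch:

```python
import math

def closest_vertex(p, verts):
    """Naive closest-vertex correlation: snap p to the single nearest
    meshA vertex. O(n) per query; several meshB vertices can end up
    riding on the same meshA vertex."""
    best_i, best_d = -1, float("inf")
    for i, v in enumerate(verts):
        d = math.dist(p, v)
        if d < best_d:
            best_i, best_d = i, d
    return best_i

def weighted_neighbors(p, verts, k=3):
    """Stand-in for the 'weighted list of vertices' idea:
    inverse-distance weights over the k nearest meshA vertices,
    normalized to sum to 1. An exact hit gets full weight."""
    nearest = sorted((math.dist(p, v), i) for i, v in enumerate(verts))[:k]
    if nearest[0][0] == 0.0:
        return [(nearest[0][1], 1.0)]
    inv = [(i, 1.0 / d) for d, i in nearest]
    total = sum(w for _, w in inv)
    return [(i, w / total) for i, w in inv]
```

A meshB vertex sitting between two meshA vertices then inherits a blend of their deltas instead of jumping wholesale to one of them, which is where the smoother results come from.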
2. Mesh Shape Differences
Note that none of the above really address this issue at all. If the nose of one figure is near the lips of the other figure, then none of the above methods do anything to handle that - this has to be considered separately. So the question is how to handle it. And the answer is - I don't know :).
Without some MIT grad coming up with heuristic algorithms that recognize amorphous 'features' of a mesh (nose, nostrils, lips, ears, etc.), I don't know of any way to handle it purely programmatically (let alone spell the words I just used above).
You may or may not know that PhilC's WW doesn't try to do this strictly programmatically either. He has to sit down and produce the datasets for each new mesh, needed to correlate the shapes in some way that's usable by his scripts.
As mentioned earlier, if the shapes are 'relatively close', then due to the nature of many organic morphs, you can probably still get decent results. But as you get more specific with the morphs, the closer the shapes are going to need to be.
You can either have the human operator position and shape the meshes to get them similar before running the script, or someone is going to have to sit down and do that for entire figures to build reusable datasets that can be loaded at runtime.
This was the point I was trying to make earlier... if you have the skills to morph one figure into the shape of another, then you should be skilled enough to make the morphs you need to start with :). It's kind of a catch-22 situation.
Cinema4D Plugins (Home of Riptide, Riptide Pro, Undertow, Morph Mill, KyamaSlide and I/Ogre plugins) Poser products Freelance Modelling, Poser Rigging, UV-mapping work for hire.