
Renderosity Forums / Poser Python Scripting




Subject: Moving morphs between different figures


Spanki ( ) posted Fri, 29 December 2006 at 3:04 AM · edited Fri, 29 December 2006 at 3:05 AM

file_363949.jpg

Here you go...

It looks like the math pans out.  In this image, the values higher up are the distances between vertex A and the green intersection vertices 1, 2 and 3.  The values below them are the distances from vertex A to the line B-C along the lines through A-1, A-2 and A-3, represented by the blue dots at the bottom.  Here are the three computed weights...

1.0 - (150.539 / 244.949) = 1.0 - 0.61457282944612960248868131733545 = 0.385427170553870397511318682665
1.0 - (140.936 / 229.325) = 1.0 - 0.61456884334459827755369017769541 = 0.385431156655401722446309822305
1.0 - (137.422 / 223.607) = 1.0 - 0.61456931133640717866614193652256 = 0.385430688663592821333858063478

...note that the weights are all the same (to within 4 decimal places; the software only gave me 3 decimal places of measurement).  So the weighting for A would be the same for each of the intersection points 1, 2 and 3.
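For anyone following along, here is a minimal sketch (my own illustration, not Spanki's code) of the weighting idea described above: for a point inside a triangle, the weight of a corner vertex is 1.0 minus the ratio of the corner-to-point distance over the corner-to-opposite-edge distance measured along the same ray.  Done for all three corners, the weights are the point's barycentric coordinates and sum to 1.0.

    import numpy as np

    def vertex_weight(corner, p, edge_start, edge_end):
        """Weight of 'corner' for point p, using the opposite edge edge_start-edge_end."""
        corner = np.asarray(corner, dtype=float)
        p = np.asarray(p, dtype=float)
        b = np.asarray(edge_start, dtype=float)
        c = np.asarray(edge_end, dtype=float)
        d = p - corner                    # direction of the ray corner -> p
        e = c - b                         # direction of the opposite edge
        # Solve corner + t*d = b + s*e (least squares copes with lines that
        # only nearly intersect in 3D).
        (t, s), _, _, _ = np.linalg.lstsq(np.column_stack([d, -e]), b - corner, rcond=None)
        q = corner + t * d                # where the ray through p meets the line B-C
        return 1.0 - np.linalg.norm(p - corner) / np.linalg.norm(q - corner)

    def triangle_weights(a, b, c, p):
        """Weights of p with respect to triangle ABC; they sum to 1.0."""
        return (vertex_weight(a, p, b, c),
                vertex_weight(b, p, c, a),
                vertex_weight(c, p, a, b))

    # Sanity check: the centroid of any triangle should get weights of 1/3 each.
    print(triangle_weights((0.0, 0.0, 0.0), (3.0, 0.0, 0.0), (0.0, 3.0, 0.0), (1.0, 1.0, 0.0)))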

Cinema4D Plugins (Home of Riptide, Riptide Pro, Undertow, Morph Mill, KyamaSlide and I/Ogre plugins) Poser products Freelance Modelling, Poser Rigging, UV-mapping work for hire.


Spanki ( ) posted Fri, 29 December 2006 at 3:55 AM

I was worried that my calculations only panned out on triangles with equal-length sides, so as a sanity check I just tried one with unequal sides and got the same results; the weights all still added up to 1.0, so we're good there too :).



Spanki ( ) posted Fri, 29 December 2006 at 4:02 AM

Just as an aside... I'm still quite leery about these computations doing any kind of morph transfer between dissimilar figure shapes.  In the case of low-res V3 -> high-res V3, or the lo/hi Miki example that Joe is doing, this should help.  But I don't think you can programmatically know enough about the topological shape differences between, say, V3 and Miki to transfer morphs between them.

IIRC, you started this thread/process off with the intention of transferring some morphs from V1 (?) to V3... I don't recall off-hand how similar their face shapes are, but things are going to go badly if they are very much different.



Cage ( ) posted Fri, 29 December 2006 at 2:27 PM

*"IIRC, you started this thread/process off with the intention of transfering some morphs from V1 (?) to V3... I don't recall off-hand how similar thier face shapes are, but things are going to go badly if they are very much different."

*That's apparently a bit of a limitation with the present method, as well.  V3 to V1 actually works fairly well, although I haven't tested any complex morphs involving the ears, which don't line up - I don't have any V3 shaping morphs.  JoePublic has found some trouble transferring morphs between dissimilar heads, but found a workaround involving lining up the geometries so the mouths corresponded decently with one another.  This makes me wonder if some sort of 'marker' method could be used.  Place a box at the mouth, nose, ears, for both the source and the target, and correlate verts and polys within the corresponding boxes.  Maybe.  

I guess we'll see....  All of this probably deserves a smarter programer.  Then it would probably end up being a 'for sale' script somewhere, however.

Thank you again.  I hope to get things together to start testing these ideas, within the next few days.  I'm slowly groping toward both an understanding and a seemingly feasible structure.  Hopefully....  :)

===========================sigline======================================================

Cage can be an opinionated jerk who posts without thinking.  He apologizes for this.  He's honestly not trying to be a turkeyhead.

Cage had some freebies, compatible with Poser 11 and below.  His Python scripts were saved at archive.org, along with the rest of the Morphography site, where they were hosted.


JoePublic ( ) posted Fri, 29 December 2006 at 5:44 PM · edited Fri, 29 December 2006 at 5:45 PM

file_363997.jpg

Just finished some more experiments.

Attached picture shows my attempt to transfer V3's head shape over to V4.
The script transferred the general shape quite nicely, but the geometry is completely messed up.
(I excluded the ears and eyelashes with Ockham's script)

The dark spots show zones where the mesh has even folded over on itself.
Tried to clean it up in Wings3D, but it's too much work.
The same happened with an attempt to transfer Aiko's head shape over to V4.

Nevertheless, I can only emphasize again that I'm more than happy with the results I get when transferring morphs from V3 to V3RR, or even V4 expressions over to V3RR.

:thumbupboth:


Cage ( ) posted Fri, 29 December 2006 at 7:25 PM

Yep.  What you have pictured there is what I'm struggling with when going from V1 to V3.  Vertex piggybacking.  Simply finding the nearest correlate vertex is inadequate for that sort of transfer.  I have high hopes for all of the ideas currently being discussed, however.

I managed to rescue the class method for gathering all the geometry data up front.  It runs quite quickly with the V3 head, gathering everything in less than a minute.  I think this should help speed up comparisons later as, for instance, I'll have all of the normals and planes for the polygons already there to query.  I hope that can help.

Now I'm just about ready to start putting together the line-plane intersection stuff....
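As an aside for anyone following along, the core of that is standard math; a generic line-plane intersection routine (assumed here, not taken from the script in progress) looks something like this:

    import numpy as np

    def line_plane_intersection(origin, direction, plane_point, plane_normal, eps=1e-9):
        """Return the point where the line origin + t*direction meets the plane, or None."""
        origin = np.asarray(origin, dtype=float)
        direction = np.asarray(direction, dtype=float)
        plane_point = np.asarray(plane_point, dtype=float)
        plane_normal = np.asarray(plane_normal, dtype=float)
        denom = np.dot(direction, plane_normal)
        if abs(denom) < eps:
            return None                   # line is (nearly) parallel to the plane
        t = np.dot(plane_point - origin, plane_normal) / denom
        return origin + t * direction     # note: t may be negative (behind the origin)

    # Example: a line along +Z from the origin meets the plane z = 2 at (0, 0, 2).
    print(line_plane_intersection((0, 0, 0), (0, 0, 1), (5, 5, 2), (0, 0, 1)))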



Spanki ( ) posted Fri, 29 December 2006 at 8:52 PM

Yeah, just to clarify a bit, there are a couple of approaches you could take...

  1. make mesh 1 match mesh 2 as closely as possible (however it's currently morphed).
  2. develop a 'vertex mapping' between mesh 1 and mesh 2 in order to determine how to apply morph deltas from mesh 2 to mesh 1

...in the first method, the goal might be to make V3RR 'look like' Miki or even a morphed Miki (for example).  In the second method, you don't change the overall shape of V3RR, but you try to replicate Miki's "Nose lengthen" morph, or "Breasts Huge" morph, or whatever so that you are only applying the morph deltas to the existing figure.

The method I've been talking about above is method #2.  Once you determine the weighting / vertex mapping table for any two meshes, you can then use that table to (relatively) simply 'apply' [weighted] morph deltas from the source figure to the target figure.

I really hadn't looked into your existing/earlier scripts yet, but it sounds like there may be some of method #1 in there.  With method #2, by design, vertices not involved in the morph don't even get altered.  With method #1, you'd pretty much always affect all the vertices.
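As a rough illustration (my own, with a made-up data layout, not code from this thread), a pre-computed vertex mapping table for method #2 could be stored as "target vertex -> list of (source vertex, weight)" and applied like this:

    # mapping: {target_vertex_index: [(source_vertex_index, weight), ...]}
    def transfer_deltas(mapping, source_deltas):
        """source_deltas: {source_vertex_index: (dx, dy, dz)} for one source morph."""
        target_deltas = {}
        for tgt, influences in mapping.items():
            dx = dy = dz = 0.0
            for src, w in influences:
                sdx, sdy, sdz = source_deltas.get(src, (0.0, 0.0, 0.0))
                dx += w * sdx
                dy += w * sdy
                dz += w * sdz
            if (dx, dy, dz) != (0.0, 0.0, 0.0):    # vertices the morph never touches stay put
                target_deltas[tgt] = (dx, dy, dz)
        return target_deltas

    # Example: target vertex 0 sits on a source triangle (verts 10, 11, 12)
    # with weights 0.2 / 0.3 / 0.5.
    mapping = {0: [(10, 0.2), (11, 0.3), (12, 0.5)]}
    print(transfer_deltas(mapping, {10: (1.0, 0.0, 0.0), 11: (1.0, 0.0, 0.0)}))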



Cage ( ) posted Fri, 29 December 2006 at 11:39 PM

If I understand what you're saying, I think a variant of your #2 procedure is currently in use.  Right now I'm 'mapping' vertices in one mesh to vertices in another, then transferring adjusted morph deltas between them.  I don't really compare the meshes beyond finding the verts to correlate with one another, and I don't look at the 'shape' of the mesh beyond normalizing and applying the morph deltas.  I think Ockham's NoPoke uses something more like procedure #1, in which he compares the world vertex positions and moves the verts of one mesh to adapt to the shape of the other.  Hmm.  I've been thinking more in terms of the difference between vertex and surface comparisons, or between comparing the base geometry and the world vertex geometry.  I'll have to think about what you're saying.

Once the vertex mapping is in place, it seems like it could be used as a point of reference for matching two meshes' default shapes.  Earlier I speculated about using the mapping to try to adjust one object's UVs to line them up with another object's.  This at least seems feasible, but I do have a tendency to assume things will be simpler than they turn out to be.  :-P

The trick will be getting the mapping down effectively.  I'm not sure whether the 'raycasting' method of looking for line-plane intersections will return better correlations than the current method, which just tries to find the closest vertex.  Presumably they'll both have similar limitations and somewhat different strengths and weaknesses....  It would be hard to get uglier than some of the results I've ended up with while experimenting with the current vertex method.  Hoo boy.
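For reference, the 'closest vertex' correlation currently in use amounts to something like the brute-force sketch below (illustrative only; a spatial partition such as the octree subdivision mentioned earlier would be needed for meshes the size of the V3 head):

    import numpy as np

    def closest_vertex_map(target_verts, source_verts):
        """mapping[i] = index of the source vertex nearest to target vertex i."""
        source = np.asarray(source_verts, dtype=float)
        mapping = []
        for t in np.asarray(target_verts, dtype=float):
            d2 = ((source - t) ** 2).sum(axis=1)   # squared distance to every source vert
            mapping.append(int(d2.argmin()))
        return mapping

    # Example: two target verts mapped onto a three-vert source mesh.
    print(closest_vertex_map([(0, 0, 0), (1, 1, 1)], [(0.1, 0, 0), (2, 2, 2), (0.9, 1, 1)]))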



Cage ( ) posted Fri, 29 December 2006 at 11:47 PM

file_364027.doc

I'm not sure this will be useful for anyone else, at least not without a bit of cleanup, but this script provides a Geom class which stores the geometry data in some useful arrangements that PoserPython doesn't automatically provide.  Using Geom.pverts, for instance, you can query whether a vertex index is in a given polygon.  Other variables provide lists of poly/material correlations and vert/material correlations, and the start and end points of polygon edges, as well as edge lengths, are gathered for reference.  It currently has some bounding box functions built in.  If it seems useful to anyone, I'll clean it up a bit.

Run the script in an empty Poser scene to get an example of the formatting of the data which it provides.  (And to see some boxes appear and disappear....)

The V3 head, with 38,000+ verts, runs through the class in about 20 seconds, so it doesn't seem to create any slowdown problems at this point.
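As a rough idea of the kind of lookups being described, here is an assumed minimal version (not the attached class) of a vertex-to-polygon index built from the PoserPython calls Geometry().Polygons(), .Sets() and .Vertices(); everything else below is illustrative:

    import poser

    class GeomIndex:
        def __init__(self, actor):
            geom = actor.Geometry()
            self.verts = geom.Vertices()       # vertex objects (query with .X(), .Y(), .Z())
            sets = geom.Sets()                 # flat list of vertex indices for all polygons
            self.polys_of_vert = {}            # vertex index -> [polygon indices]
            self.verts_of_poly = []            # polygon index -> [vertex indices]
            for pnum, poly in enumerate(geom.Polygons()):
                start, count = poly.Start(), poly.NumVertices()
                vidx = list(sets[start:start + count])
                self.verts_of_poly.append(vidx)
                for v in vidx:
                    self.polys_of_vert.setdefault(v, []).append(pnum)

        def vert_in_poly(self, vert_index, poly_index):
            return vert_index in self.verts_of_poly[poly_index]

    # Example usage (inside Poser, with an actor selected):
    # idx = GeomIndex(poser.Scene().CurrentActor())
    # print(idx.vert_in_poly(0, 0))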



Spanki ( ) posted Sat, 30 December 2006 at 1:52 AM

Quote - If I understand what you're saying, I think a variant of your #2 procedure is currently in use.  Right now I'm 'mapping' vertices in one mesh to vertices in another, then transferring adjusted morph deltas between them.  I don't really compare the meshes beyond finding the verts to correlate with one another, and I don't look at the 'shape' of the mesh beyond normalizing and applying the morph deltas.

Ahh - thanks.

Quote - Once the vertex mapping is in place, it seems like it could be used as a point of reference for matching two meshes' default shapes.

Yes... in this case, you'd be making a vertex mapping table between 2 default mesh shapes and then later using that data to transfer the morphs from the source to the target mesh.

Quote - The trick will be getting the mapping down effectively.  I'm not sure whether the 'raycasting' method of looking for line-plane intersections will return better correlations than the current method, which just tries to find the closest vertex.  Presumably they'll both have similar limitations and somewhat different strengths and weaknesses....  It would be hard to get uglier than some of the results I've ended up with while experimenting with the current vertex method.  Hoo boy.

 

If I understand the current method correctly (finding the closest vertex), then I suspect this new method will be an improvement in most cases, on the assumption that an intersection point follows what the related surface is doing more closely (it can be influenced by more than one vertex).



Angelouscuitry ( ) posted Sat, 30 December 2006 at 3:09 AM

Hi Cage!

I know modeler applications can alter the resolution of a mesh.  Do we have any functions like this in Python?  Have you considered just injecting the low-res target with extra vertices, just to make the resolutions even to start?  Then you may be able to rule out piggybacking, to a degree, if you use an octree (small areas at a time).

Speaking of octrees, I think this approach would be rather effective where you mention only comparing eyelash to eyelash and lip to lip.  The drawback might be needing such custom detail for every figure.  But then again, this isn't something that couldn't be done with a little patience.  I've already spent considerable time creating some of these groups in the V3 figure's head for my magnet work.  If you're interested, let me know.  I have those, and in anticipation I'll draw more for Sydney, M2, JamesG2, and/or Kelvin.

:tt2:


Spanki ( ) posted Sat, 30 December 2006 at 10:06 AM

Angel, simply making the vertex counts match really doesn't help anything in this case.



Cage ( ) posted Sat, 30 December 2006 at 2:08 PM · edited Sat, 30 December 2006 at 2:09 PM

The vertex count issue is hopefully only relevant with the current method used by the script.  What you suggest (injecting a lo-res?) would only seem to work after we already have some sort of vertex mapping, if we're talking about increasing vert counts.  Otherwise we wouldn't know where to put those new verts.  But Spanki (and Ockham) are right... the vert count issue should be irrelevant if the method under discussion works as hoped.

As far as subdividing the correlations by materials or groups, I think that may help, even with what Spanki is outlining (which I hope to be able to implement...).  I've added the foundations for such comparisons of materials into the geom class.  It would probably require that the user specify which materials in the source to correlate with which materials in the target, because naming will differ between cases.
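A tiny illustration of that user-specified pairing (the material names here are hypothetical):

    material_pairs = {            # target material -> source material (supplied by the user)
        "Lips": "SkinLip",
        "Teeth": "Teeth",
        "EyeballLeft": "Eyeball",
    }

    def allowed_source_verts(target_material, source_verts_by_material, pairs=material_pairs):
        """source_verts_by_material: {material name: [vertex indices]} for the source mesh."""
        return source_verts_by_material.get(pairs.get(target_material, ""), [])

    # Only source verts using the paired material are candidates for a 'Lips' target vert.
    print(allowed_source_verts("Lips", {"SkinLip": [3, 7, 9], "Teeth": [12]}))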



Cage ( ) posted Thu, 04 January 2007 at 10:45 PM · edited Thu, 04 January 2007 at 10:48 PM

file_364604.doc

Well, here's what's happening.  I have a working method to find the polygons for each vert, as outlined by svdl on page 1.  But there is a serious flaw somewhere which I can't seem to isolate and repair.  Running the current process with anything but objects of the lowest vertex counts leads to serious RAM leakage while comparing the meshes, to the point where the method becomes unworkable.  I seem to be fighting with intrinsic flaws in Python's memory management.  These are allegedly fixed in Python 2.5, but that doesn't help us in PoserPython-land.

So until and unless I either wake up with a magically increased IQ or someone smarter and more Python adept comes along to fix the problem, the script looks like it may be sunk.  I think this script concept could be a wonderful addition to the Poser arsenal if it can be made to work, but making it work probably requires someone better at this sort of thing than I am.  

The attached is the current state of the development testing script for the vert-to-polygon method.  It requires a Poser scene with two props loaded.  If anyone wants more detail about the code, hopefully to try to work out the trouble, I'll be happy to explain further.



Angelouscuitry ( ) posted Fri, 05 January 2007 at 1:27 AM · edited Fri, 05 January 2007 at 1:28 AM

I've seen some of your other scripts that don't depend on PoserPython, and I'm kind of surprised you've stuck with it this far, especially considering how complicated this task is compared to those others.

I understand why you'd want to take care of this with PoserPython, but I wouldn't let it hold you back.  Anyone happy to use PoserPython to get their work done would surely look twice before dismissing any other script that would help.  Just think of how many external 3rd-party utilities there are for Poser.

I'd bet that if you were to get this to work in any other Python, you'd find a place for E-Frontier to strive toward, as well as more than a handful of people willing to take the script downhill, back into Poserdom, with bells and whistles.
 


Cage ( ) posted Fri, 05 January 2007 at 1:47 AM

I don't know enough about the math involved to really be able to develop this without using PoserPython for testing and visualization, unfortunately.  And I'm not altogether sure switching to a standalone Python 2.5 script would rescue the process from the current problems.  Basically, I don't know enough about programming.  Maybe I'll make more progress on this some time in the future.  Hopefully someone else will run with the idea before that.  The math really isn't so bad, although I don't understand why any of it is appropriate at any given point.  The problem is what seems to have been poor planning with Python's design, in the early days.  Certain memory handling was implemented to speed things up, presumably when computers were slower and less powerful.  I've read some serious rants about this memory leaking, online, coming from professional programmers.  If the pros can't beat this problem, I'm not going to keep banging my head against the wall over it....

Bummer, huh?



Angelouscuitry ( ) posted Fri, 05 January 2007 at 5:04 PM

Seriously,

I haven't even gotten a chance to try!

My P7SE Setup DVD arrived last Tuesday, blank.  E-Frontier has promised me a new one, but I'm still waiting!

Do you think it will be worth trying Sydney to V3?  May I have a copy of the compile.data file for her?

=  (


Cage ( ) posted Fri, 05 January 2007 at 9:14 PM · edited Fri, 05 January 2007 at 9:15 PM

The initially posted scripts still work, of course.  The process won't return better morph transfer results now than at first, but the processing will be much faster with the final build of the vertex-to-vertex script than it was with the initial release.  That, unfortunately, means I never found a way to overcome the low-res to high-res problems.  JoePublic's successes are still possible elsewhere, however.  If you want to know about Sydney to V3, you'll have to try it.  Use the final vertex comparison script if and when you test.  It will be faster than the first script that was posted.

Nobody should use the "class" I posted!  It does organize the data into nice, useful lists, but those lists, when actually used for anything, create ugly reference loops which seem to be the source of much of the RAM problem with the later vertex-to-polygon efforts.

I am still tinkering with the above code.  I'm managing to clear a lot of garbage references, finally, but the RAM isn't being released - apparently there are still unrevealed cycles somewhere.  But I've finally figured out how to track the number of references to objects.  Perhaps that can lead somewhere.  I'm not overly optimistic, however.  Don't hold your breath.  :)
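For anyone wanting to poke at the same problem, the reference tracking being described can be done with the standard sys and gc modules; the snippet below is generic Python, not the script's code:

    import gc
    import sys

    class Geom:
        pass

    a = Geom()
    b = Geom()
    a.other = b
    b.other = a                       # a reference cycle: a -> b -> a

    print(sys.getrefcount(a))         # references to 'a' (includes the call's own temporary)
    print(len(gc.get_referrers(a)))   # objects that currently refer to 'a'

    del a, b
    print(gc.collect())               # how many unreachable objects the collector freed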



adp001 ( ) posted Sat, 06 January 2007 at 8:37 AM

Perhaps you should really switch to numpy, or at least Numeric (it seems that numpy is only available for Python >= 2.3).
Both libs are made to handle and manipulate very large arrays.

"NumPy derives from the old Numeric code base and can be used as a replacement for Numeric"

http://numpy.scipy.org/#NumPy

Description for using arrays (from numeric):
http://numpy.scipy.org/numpydoc/numdoc.htm




nruddock ( ) posted Sat, 06 January 2007 at 9:07 AM

Quote - Perhaps you should really switch to numpy ...

Installing a new Python module into Poser is going to be awkward for most of the people who will want to use this script (assuming a copy of numpy compiled appropriately is available).


adp001 ( ) posted Sat, 06 January 2007 at 10:42 AM

Numeric is a Poser-Python built-in.




Angelouscuitry ( ) posted Sat, 06 January 2007 at 1:30 PM · edited Sat, 06 January 2007 at 1:32 PM

/me checks her lipstick...

"Installing a new Python module into Poser is going to be awkward..."

I'd hop, upside down, on one arm, to see a morph transfer between unlike figures; for the first time ever.

Never mind it being Free...


Spanki ( ) posted Sat, 06 January 2007 at 1:47 PM

I don't want to dampen your enthusiasm, but you might have some unrealistic expectations...

Frankly, I don't see transferring, say, facial morphs between Sydney and V3 or V4 as being particularly likely to give any sort of acceptable results - unless you started out by shaping the heads as similarly to each other as possible (and if you're going to go through all that trouble and have the skills to accomplish it, you'd be better off just making the morphs yourself to start with :).

I think the real utility of this tool will be in transferring morphs from figures to clothing (which, by nature, has a 'similar' shape).



Spanki ( ) posted Sat, 06 January 2007 at 1:50 PM

...I just wanted to qualify that with... "acceptable" is of course a subjective term.  But in general, the closer the two shapes are to start with, the better the results will be.



Cage ( ) posted Sat, 06 January 2007 at 1:58 PM

I did try switching to Numeric.  Part of the problem is that I need lists that are three layers deep in a couple of situations, to organize the polygon data.  If this data isn't organized up front, the whole process is slowed terribly, in addition to leaking RAM.  Numeric wouldn't allow a three-layer array.  Numeric for PoserPython also lacks the list methods available for Numeric in later Python releases.  It didn't really seem to offer much by way of improvement.  :(
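One possible way around the three-layer limit (a sketch under the assumption that the ragged per-polygon vertex lists are what needs nesting) is to keep the data in flat arrays plus a start/count table, which is the same layout Poser's Geometry().Sets() already uses and which Numeric or numpy can hold without nesting:

    import numpy as np   # Numeric offers the same array/cumsum/concatenate operations

    # Ragged data: polygon 0 has 4 verts, polygon 1 has 3, polygon 2 has 4.
    poly_verts = [[0, 1, 5, 4], [1, 2, 5], [2, 3, 7, 6]]

    sets   = np.array([v for poly in poly_verts for v in poly])    # flat vertex indices
    counts = np.array([len(poly) for poly in poly_verts])
    starts = np.concatenate(([0], np.cumsum(counts)[:-1]))         # offset of each polygon

    def verts_of_poly(p):
        return sets[starts[p]:starts[p] + counts[p]]

    print(verts_of_poly(1))   # -> [1 2 5]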

I'm not sure how much of my problem is coming from the effort to organize the data and how much may be intrinsic to the calculations, however.  Poser allows access to all of the necessary data, but only in a bare-bones manner.  All of it has to be sorted and organized before it can be used.  Sorting it during the process of running vertex to polygon comparisons has seemed like a problem, when I've tried it.  I suspect this whole morph transfer process can work if the programmer knows how to structure something well enough to minimize complications in the process.  Unfortunately, I lack both the programming skill and the math background to approach this more efficiently.  

And yes, there will inevitably be functional limits on what the resulting script can really achieve.  I'm not sure what those limits would end up being.  All of this may only be able to work well in situations like those in which JoePublic has had success.  I'd like to think there is more potential than that.  Unfortunately, it's looking increasingly like I lack the potential to make it happen....  :(

Still trying, however, in my haphazard way....



Spanki ( ) posted Sat, 06 January 2007 at 2:16 PM

Cage, you started off this thread by saying that your early efforts had given some decent results (on the figures in question), so there is hope.  I just wanted to point out that the wide differences in shapes (of, say, Sydney and V3) are probably not going to yield good results.



kobaltkween ( ) posted Sat, 06 January 2007 at 2:32 PM

ok, i know nothing about poser python, but...

this sounds a lot like what wardrobe wizard does, to some extent.  and you're saying that the problem is organizing the data while doing the comparison.  well, why can't the two be separated by the user?  with ww, you have to analyze something before you can try to fit it.  doing so creates a .dat file with all the appropriate info.  if you try to convert something that hasn't been analyzed first, you get told to do so.  if you have analyzed it, you just convert.  analysis can take from a few seconds to several minutes (and seems to need poser to be active).  conversion is pretty quick.

i would think that if ww could convert between kiki and apollo (i believe that was the promo image when apollo was added), morphs between humanoid figures wouldn't be impossible.  another fitting script seems ubiquitous.  i mean, it might be nice to have something where new figures didn't need to be added by the creator, but since it wouldn't work between figures, it would be kind of moot (most minor figures don't have many morphs).



Cage ( ) posted Sat, 06 January 2007 at 2:51 PM · edited Sat, 06 January 2007 at 2:51 PM

Spanki, I'm despairing of my ability to accomplish anything more at this point.  This Python memory business is way over my skill level.  If the programming platform is fighting with me, I can't really counter that.  But I agree with what you're saying.  There will be limits.  It would be quite tricky to effectively correlate two geometries with extremely different forms.

The data which needs to be organized is the polygon and vertex information for the actors being compared.  The user enters this process by specifying the actors.  After that, all the verts and polys need to be looked at.  

I've never used Wardrobe Wizard, so I'm not sure what it does in full.  I think it goes a bit beyond all of this and alters joint parameters and moves the actual base geometry to line up with the target figure.  Or does it?  I'm not sure....  There could be overlapping potentials.  But Wardrobe Wizard actually works, and most of the discussion in this thread is still hypothetical.   :)



kobaltkween ( ) posted Sat, 06 January 2007 at 3:02 PM

yep, ww does copy over joint parameters.  it can do this automatically (just a regular convert) or manually (if you need to do a more involved conversion - for instance, if you want to leave grouping alone to allow morph transfer).  but ww wasn't my point.  i was thinking that most people would find it more than acceptable to sit and wait for a figure (or body part) analysis prior to morph transfer.  just let them know that it's happening and show progress on the task.  if that would make things easier, that is.



Cage ( ) posted Sat, 06 January 2007 at 3:35 PM

I see.  So you're saying that I shouldn't be so concerned about the speed of processing?  Hmm....

Out of curiosity: when you allude to 'morph transfer' using WW, how do you mean that?  This thread is about trying to transfer morphs between actors with incompatible geometries, such as the Vicky 1 head to the Vicky 3 head.  If I understand WW, it actually adjusts the shape of the clothing to the contour of the new figure, carrying any existing clothing morphs along with the change.  But it doesn't actually move the morph information between one figure and another.  Is that correct?  If WW can do this already, there's no point in giving myself a headache over any of this....  :)  (And if WW can do as much as it can, one would assume the infrastructure for something like the morph transfer discussed in this thread might already be in place....)



Angelouscuitry ( ) posted Sat, 06 January 2007 at 4:42 PM

Cage - I was thinking that you're referring to the memory leak as something preventing completion of the calculations.  Anything that would ever work, even overnight(?...), is fast enough for me!

Spanki - I just don't buy your theorized limits.  Call it wishful thinking or optimism, but human form is human form to me, and I suspect that whatever the problem is, numerically, once it's solved within the limits of what you would consider human, it should be solved for everything less.  I could see difficulty if the problem started with a given geometry set but ended in the creation of an unknown new figure (with likenesses to photo references or random parameters), but we are starting and ending with constants.  So I still don't understand what the big to-do is with turning V3 into a sphere, a square, or any other primitive.  I would just start at, and keep returning to, her center, pushing each vertex away along a ray in the direction it lies from the center, until every vertex has been moved to an equal distance from the center.  The mesh may be heavy in some places, but there would still be a lot of vertices.  I don't see this script working much differently, except that the vertices could stop being pushed when they reach an enlarged, centered copy of the target mesh.


Spanki ( ) posted Sat, 06 January 2007 at 5:04 PM

Uhm... ok, consider this scenario:  Suppose you are trying to convert Millennium Baby face morphs to The Freak.  First problem is, the baby's head lines up with the freak's shin somewhere, so you dial in a nose morph on the baby and his knee-cap moves :).

Ok, so you say that you need to line up the heads in space first... ok, so we pick the baby up in the air so his head fits inside the freak's.  So far, so good... or is it?  We still have no idea how a chin morph gets transferred to the freak's chin, let alone a nostril morph - the software doesn't know what a nose is.  And yet these are 2 'humanoid form' meshes.

So yes, to some degree, I do call it wishful thinking, or optimism, which is why I posted the word of caution above.  You don't need to agree or believe me, because time will tell if I'm right or wrong about it :).



Spanki ( ) posted Sat, 06 January 2007 at 5:54 PM

file_364726.jpg

...this image shows Sydney and V3 in outline mode... I took the liberty of lining up their eyes, as a point of reference, but I could have used their noses or lips or ears or chins - none of which would have made a difference in the shape disparity between them.

As you can see, even if you were to inflate them both out into a sphere, that wouldn't line up 'where' on that sphere the ears would be, relative to where on the sphere the ears ended up for a mesh with a different base shape.  So even if they had the exact same primitive shape, that still doesn't line up the individual features.  It's an interesting theory, but I'm afraid that it doesn't work.



Cage ( ) posted Sat, 06 January 2007 at 7:13 PM

My thought for dealing with the above question was to have the user position boxes to specify where certain features fall on the source and target meshes.  Then the verts and polys between the corresponding boxes would be compared to one another, expanding on the basic methods of the current 'octree' subdivision process.  That might offset some of the problems of comparing dissimilar shapes.  But it would only go so far, even if it were to work.
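A minimal sketch of that box-marker filtering (my own illustration, with made-up coordinates): only vertices falling inside a source box would be matched against vertices falling inside the corresponding target box.

    def inside(box_min, box_max, point):
        return all(lo <= c <= hi for lo, c, hi in zip(box_min, point, box_max))

    def verts_in_box(verts, box_min, box_max):
        return [i for i, p in enumerate(verts) if inside(box_min, box_max, p)]

    # Example: a 'mouth' box on the source mesh restricts the comparison set to vert 0.
    print(verts_in_box([(0, 0, 0), (5, 5, 5)], (-1, -1, -1), (1, 1, 1)))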

Not that I expect to get there....  :sigh:



kobaltkween ( ) posted Sat, 06 January 2007 at 7:36 PM

on ww -

spanki's post and yours prompt me to send you on to philc.  what wardrobe wizard does is fit stuff that isn't clothes or shoes to a supported figure.  what does a program care whether the  mesh and character file belongs to a human or a body suit?  how would it tell the difference?  that said, while it converts hair, i've never tried it on masks.  and i know he just avoided feet and hands altogether. 

on the knowing which part is which, well ww manages, and my guess is that it has to do with what philc does to support a figure.  also, he uses magnets to deform the converted surface.  but you might talk to him about what you're trying to do.



JoePublic ( ) posted Sat, 06 January 2007 at 8:43 PM · edited Sat, 06 January 2007 at 8:47 PM

file_364734.jpg


Cage's script works best when transferring morphs from a highrez to an identically shaped lorez mesh.
As I almost exclusively use V3RR, and many custom morphs are available for standard V3 only, this alone is a great help.
It also allowed me to create a fully functional lorez MIKI, as well as lorez versions of Laura, Maddie, etc.

But the script also works between completely unrelated meshes, even if there are a few caveats.

Transferring V3's head shape over to V4 doesn't work.
Same with transferring V3's bodymorphs over to "V4toV3".
(I really hoped that would work because V3 and V4toV3 have identical shapes)

But transferring V4's expression morphs over to V3RR worked quite nicely as you can see on the attached picture.
A bit of cleaning up is necessary for open mouth morphs, but this is far easier than creating expression morphs completely from scratch.

I had to create a "dummy V4" cr2 that places V4's head as close as possible over V3RR's head to line up the mouth and nose and eyes.

Of course the closer both meshes are, the better the result will be.
For example I was able to transfer those V4 expressions from V3RR over to V2LO with no further modifications necessary.


Cage ( ) posted Sat, 06 January 2007 at 9:12 PM

Hmm.  It looks like Ockham's WalkThisWay.py, version 1, suffered from these same memory problems.  He suggests that the problem is a Poser memory leak, specifically with Poser 5.  WalkThisWay2 avoids the problem through heavy restructuring, which seems to remove most of the collision or positioning comparison in WTW1.  So apparently no real solution was found for the problem, but this may show that the problem is indeed Poser.

Which may mean a standalone Python 2.5 script, at least for the comparison part of the process, would be the best solution.  Unless anyone wants to take the thought a step further by building some sort of external application or module in C++....  (No takers?  Alas....)  I'll see if this seems feasible.

cobaltdream - I think there could be some conflict of interest for PhilC about sharing secrets used by WW to assist in the creation of a freebie script.  I wouldn't go around spilling the secrets if I were him....  (And yet, I somehow liked the Poser world better before it was marketplace and profit motive driven....)  Again, alas.

JoePublic, that's the sort of success I've seen with V3 to V1.  Not bad, generally.  You say mouth geometries need cleanup... but what about ears?  How well are they faring?



JoePublic ( ) posted Sat, 06 January 2007 at 9:35 PM

Hmm, never tried to transfer ear morphs, as they are easy to do from scratch with magnets or using Wings3d.

When I tried to transfer V3's head shape over to V4, the ears got messed up big time, much more than the rest.
When I created a LowRez Maddie, the ear shape transferred quite nicely.

I'd say unless you want to transfer the exact shape of the MilCat's or MilDog's head over to V3 or V4, I just wouldn't bother too much with the ears.
Same for the teeth or tongue.

It would be nice to have a perfectly working "one click" script, but I really don't expect a Poser Python script to have the same shrink-wrapping functionality that you can otherwise only find in a select few top-end modellers.
(And even with them it is a multi-step process.)


kobaltkween ( ) posted Sat, 06 January 2007 at 9:44 PM

um,  i think you misunderstand.  i was thinking he might be of help figuring out solutions to what you're doing, not giving you precise info about ww.  and it's his extreme helpfulness and politeness as well as the fact that you're actually trying to do something different that makes me think he might give you a pointer or two.  also, it can't hurt to ask.  the worst he could do is say no.  the best he could do is ask you to collaborate on a ww ii or some other type of app.  it would be wonderful to have something transfer morphs as JoePublic has done.

on the subject in general - i think a lot of people are too insecure.  they get so afraid someone will take their knowledge and use it themselves that they forget that most of the work is in implementing not understanding.   i think i understand the principles behind what you're doing from your descriptions.  if my life depended on it, i couldn't script it myself.  face_off made a tutorial describing exactly how to set up his skin shader, and his products are still popular.  dreamlight made a tutorial about how to make his light dome, and gave away a low end version, and his product still sold well.  people don't generally want to do it themselves. 



Angelouscuitry ( ) posted Sat, 06 January 2007 at 10:55 PM · edited Sat, 06 January 2007 at 10:58 PM

" First problem is, the baby's head lines up with the freak's shin somewhere,"

Your thinking is much like having a background with Adobe Photoshop, but not Illustrator.  Photoshop is a Pixel based program, where the more you enlarge an image the more you lose the definition of that image.  Where as with a Vector based program like Illustrator you can take an image about the size of a any size and post it on a billboard, without any loss in the sharpness of the edges.

" We still have no idea how a chin morph gets transfered to the freak's chin,"

Well you've tried to go right to the edge of what the acceptable limits of Human form is.  I doubt the Freak is human, but and even if her were that's as far as you'd ever get...I did agree that groups might help us along a little, but do'nt forget your still talking "Chin," "Nose," and "Ear."  And that you always have, and always will be.  Nobody is going to change that, nor the defaults of the Mil' Baby or Freak.  Kinda like my primitive excersize*

"time will tell"

*Indeed.  I just ca'nt believe I'm here, before it has already.  My hypothesis is that this is inevitable. 

"I took the liberty of lining up thier eyes"

*Close, but no cigar.  You ca'nt line up any one part, especially where any others suffer, like where the outline of one wonders in and out of the other's.  This is kinda where Groups or Sundivision fail, excpet to break the math up into digestable chunks...Make V3(because she is the higher resolution) slightly bigger, all the way around, so their outlines do'nt touch at any point(but just barely) Now replace each out"line" with paths of dots(pixels).  And if they are the same number of pixels this could get easy fast..., but otherwise just make sure the outer has more.  Then take the top dot of the outer figure, and push it to the top of the other figure.  Repeat this process for the bottom, left, right, and all the in betweens dots,  and you've converted V3 t V4...at least from orthagonal... you would then need to repeat this process, from each degree of the Y axis, then Z, and then X....more if you can get more tan one ray trajectory per degree...but then when you're done you would render anything different!


Cage ( ) posted Sat, 06 January 2007 at 11:04 PM

JoePublic - I asked about ears because it seems like they might be the best example of what Spanki writes about, above.  I wouldn't expect the dog or cat ear morphs to be transferrable to a humanoid figure at all, but perhaps between the dog and the cat.  Between human figures, the ears are complex geometries which don't necessarily match up very well at all.  V1 and V3 have ears which are out of line.  But I don't have any V3 ear morphs to test how well they might convert to V1.  Kind of a moot point, however, since the vertex comparison method used in the current testing script wouldn't be used in any further developments.

cobaltdream - I misunderstand readily and often.  :)  Sorry.  I don't mean to suggest that PhilC wouldn't be nice or helpful about any questions.  I would just think it somehow inappropriate myself.  People generally do seem to be protective of their code, even in the Blender world with code released under the GNU public license.  It's kind of hard to go around asking the experts how to do things when you never know whether or not the question might offend them.  Maybe I'm too insecure about irritating people.  Irritating people also seems to be something I do readily and often.  :)

In thinking about switching away from PoserPython to a standalone script, I find myself looking at the potential of Visual Python, PyGeo, and Py2Exe.  Put together, they might enable someone of my limited math skills to produce a script with 3D display of the meshes, allowing some user specification of certain things, and visualization of the process.  Unless the sheer size of the meshes would slow them or my shoddy code could break things again, they could produce something fast which might operate like a MorphManager with a 3D preview.  The trouble with the idea (aside from my limitations) is that this would require installation of Python and several modules, or a distribution of an executable created with Py2Exe or by some other method.  The Py2Exe distributions tend to be rather heavy.  Or it could be a way of making a simple idea complicated.  Does anyone have any experience with any of these modules?  Any thoughts on whether this is a good idea or a bad one?

It all kind of makes me wish I had something other than Python to work with....



Cage ( ) posted Sat, 06 January 2007 at 11:26 PM · edited Sat, 06 January 2007 at 11:40 PM

@Angelouscuitry -

I think the trouble is that the script doesn't "know" anything about the meshes it's processing.  Have you ever seen a certain modernist painting, a portrait, in which the portrait's face is blank white space with the label "FACE" written on it?  Perhaps it's a Magritte?  Maybe that image kind of illustrates the problem, although it doesn't go far enough.  The computer doesn't see nose, eyes, mouth, etc.  It doesn't even perceive the topology per se, unless it's taught how.  It sees a lot of vertices which are connected in certain ways with one another.  We'd have to teach it the whole "language" of the facial form before it would know how to process effectively to compensate for mesh differences.  We'd have to teach it to see "FACE" before we could even teach it to look at facial parts.  But I provide a crummy example, I'm afraid....  Hmm.

Hypothetically, one could write something which would analyse the meshes so the program could "learn" what it's working with.  The idea seems awfully complicated.  Certainly beyond my skills and, as far as I can tell, more than PoserPython would readily want to do.  With the currently posted script, when it matches two parts of the face, the matching happens through the happy accident of finding that the closest vertex is also the most correct one for the morph transfer.  But that wouldn't always be so.  Right now, the closest vertex matches in the mouth area often confuse the teeth and the lips.  This kind of confusion could happen anywhere where the two meshes fail to line up adequately, unless we could tell the computer what we mean by "face", "nose", "ears", etc.  And even then it would only work when the conditions were right.  JoePublic mentions cat or dog ears to humanoid ears.  We think of those things as ears in both cases, but to the computer they wouldn't really have much in common at all, even if we taught it the hypothetical definitions. 

I think my box placement idea could hypothetically be used to teach the script how to associate certain matched areas with certain features in a very broad sense, but there would still be complications and the same limitations noted above and elsewhere would ultimately still hold true.  And the vertex to polygon idea may introduce new quirks when it matches verts to polys. 



Angelouscuitry ( ) posted Sat, 06 January 2007 at 11:52 PM

*"It's kind of hard to go around asking the experts how to do things when you never know whether or not the question might offend them."

*Generally the consensus is that it's better to try and fail, than than never to ahve tried at all.  Any pro would know this, and respond to your question likewise.  Whether he believes you're doing this for free, or not, his answer should always be...polite.

"a script with 3D display of the meshes, allowing some user specification of certain things, and visualization"*

Wow.  Makes me think of how similar what you're doing is to the Fae Room.  Where the Face Room goes from any odd image to a standard target, we have the added advantage of knowing the Subject figures geometry; in other words I think this would be easier, really.

"Any thoughts on whether this is a good idea or a bad one?"

*I still say let that be the least of your worries!  There are more than enough 3rd party applications to warrant this need, and most of them do'nt do half of what your goal is. 

"It all kind of makes me wish I had something other than Python to work with...."

...In fact, I severely doubt anyone would criticize any steps you'd need to take in order to make the end a mean, specifically where this would be the only method in place at it's inception!


Angelouscuitry ( ) posted Sun, 07 January 2007 at 12:10 AM

Cage - I can see where your box placement and my grouping theories would work fine.  I just think they'd add a necessary level of custom work/detail for each figure's use within the script.  This still isn't anything I wouldn't want to help with/do.

It's just that I'm not sure you'd always want to do this for every new figure... and eventually we may not, because you would not want to predefine any parameters that would inhibit the script from making it from one nose to another.  I.e., if you try to tell a computer what a nose looks like, eventually you could be wrong.


Spanki ( ) posted Sun, 07 January 2007 at 12:28 AM

Quote - " First problem is, the baby's head lines up with the freak's shin somewhere,"

Your thinking is much like having a background with Adobe Photoshop, but not Illustrator.  Photoshop is a Pixel based program, where the more you enlarge an image the more you lose the definition of that image.  Where as with a Vector based program like Illustrator you can take an image about the size of a any size and post it on a billboard, without any loss in the sharpness of the edges.

I'm quite aware that you can scale vertices infinately, but you missed my point completely. You are seeing things from a human/mind interface with no consideration of what you can or can not do or what you can or can not assume programatically.

Quote - "We still have no idea how a chin morph gets transferred to the freak's chin,"

Well, you've tried to go right to the edge of what the acceptable limits of human form are.  I doubt the Freak is human, but even if he were, that's as far as you'd ever get...

I used those two figures as the most extreme example I could think of, to illustrate the more general problem that exists even at the most minute levels.

You give the program 2 "meshes".  So far, the program knows (can assume) it has exactly that - 2 "meshes" - and those meshes are made up of vertices.  If it looks at the group data, it can determine that they both have heads.  If it looks at the .cr2 data, it can determine the 'center' of the heads, so it could line them up to that extent.  Beyond that, it has no idea how well the contours of the shapes (that make up things like ears, nose, lips, chin, etc.) relate to each other; it just knows that there's a bunch of vertices and polygons.

Your answer is that because they are vaguely similar (they both 'have' eyes, ears, nose, lips, chin, etc.), there is somehow this magical relationship that the program can work from - there's not.

Something you (as a human) might do by hand in a modelling program - moving each vertex out to where it matches the other mesh - cannot be done programmatically, with the same precision, given just that information to go on.

The fact is that the further apart in shape and position those features are from each other, the harder it's going to be to develop relationships between the corresponding vertices that make them up.

At some point (if better results are desired), you have to give the program some sort of help/hinting...

Quote - I did agree that groups might help us along a little,

Correlating groups (or, as Cage thought, even using bounding boxes) helps, but only to the extent that it restricts the set of vertices to try to match between (it refines 2 'head' groups down to 2 'lips' groups, for example); it still doesn't fix the dissimilar positions of those vertices.

Quote - "but don't forget you're still talking "Chin," "Nose," and "Ear."  And that you always have been, and always will be.  Nobody is going to change that, nor the defaults of the Mil Baby or the Freak.  Kind of like my primitive exercise"

Uhm... I'm afraid you lost me there - I'm not clear what your point is.  But taking a stab at it, we could be talking about an ashtray and a boat, or even an apple mesh and an orange mesh - again, as far as the program knows, it's just a bunch of vertices and polygons.  They may have 'generally' similar shapes, but due to the 'detail' differences between the two (assume one has a stem sloping to the left and one has a stem sloping to the right), you'd still run into problems moving morphs between them.

Quote - *"time will tell"

*Indeed.  I just ca'nt believe I'm here, before it has already.  My hypothesis is that this is inevitable. 

And mine is that you're assuming too much :).

Quote - *"I took the liberty of lining up thier eyes"

*Close, but no cigar.  You ca'nt line up any one part, especially where any others suffer, like where the outline of one wonders in and out of the other's.  This is kinda where Groups or Sundivision fail, excpet to break the math up into digestable chunks...Make V3(because she is the higher resolution) slightly bigger, all the way around, so their outlines do'nt touch at any point(but just barely) Now replace each out"line" with paths of dots(pixels).  And if they are the same number of pixels this could get easy fast..., but otherwise just make sure the outer has more.  Then take the top dot of the outer figure, and push it to the top of the other figure.  Repeat this process for the bottom, left, right, and all the in betweens dots,  and you've converted V3 t V4...at least from orthagonal... you would then need to repeat this process, from each degree of the Y axis, then Z, and then X....more if you can get more tan one ray trajectory per degree...but then when you're done you would render anything different!

 

This is a prime example of the type of reasoning/assuming too much/misunderstanding I talked about above.  Even your example would have you (the human) making decisions as you moved the points around that would cause you to move them in a non-linear way.  The process you describe here is easy to conceptualize as a human, and maybe even easy (if extremely tedious) to accomplish as a human in a modelling program, but you'd be making decisions that the program has no way to know how to make - and no idea that it even needed to make them.

Just so you know, while I'm not overly familiar with Python, I've been programming in C/C++ for 20+ years now (since the Commodore Amiga hit the stores), and some BASIC and assembler before that.  I've been doing 3D-specific or related programming for a good 15 or more of those years.  I'm not the most math-oriented guy around and definitely not the smartest guy around, but I do have a solid technical background and a pretty good idea that I know what I'm talking about here :).

Cinema4D Plugins (Home of Riptide, Riptide Pro, Undertow, Morph Mill, KyamaSlide and I/Ogre plugins) Poser products Freelance Modelling, Poser Rigging, UV-mapping work for hire.


svdl ( ) posted Sun, 07 January 2007 at 8:08 AM

Material zones might help. Lips and teeth tend to get confused - well, they have different materials, and it's possible to access the material list from Python. So it's also possible to find out if a vertex belongs to the teeth or to the lips.
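
As a rough sketch of that idea - assuming the polygon data has already been pulled out of the geometry as (material name, vertex index list) pairs, which is just an illustrative layout, not the actual PoserPython structures:

def vertex_materials(polygons):
    # Map each vertex index to the set of material names it belongs to.
    lookup = {}
    for mat_name, vert_indices in polygons:
        for vi in vert_indices:
            lookup.setdefault(vi, set()).add(mat_name)
    return lookup

# e.g. only consider teeth-to-teeth matches:
# teeth_verts = [vi for vi, mats in vertex_materials(polys).items()
#                if "Teeth" in mats]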

Still, this is NOT going to be easy. Pushing in from the outside, or pushing out from the inside - either way the inner mouth and the teeth ARE going to be troublesome. You'd ALSO have to keep the polygon orientation in mind. Example: vertex X is located on the back (inner) side of Vicki 3's teeth, say one of her front teeth. With the "pushing in" algorithm, chances are that the first V1 polygon with a "teeth" material it encounters will be one located on the "front" side. The algorithm should realise that this is not the polygon it should end up at; the vertex should travel further inwards until it reaches the back side of V1's tooth. So determining the "target" polygon should take the polygon's orientation into account: it should face the same way as the normal of the vertex you're pushing.
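
The orientation test itself is just a dot product - something along these lines, assuming the vertex and face normals have already been computed as (x, y, z) tuples:

def dot(a, b):
    return a[0] * b[0] + a[1] * b[1] + a[2] * b[2]

def faces_same_way(vertex_normal, poly_normal, tolerance=0.0):
    # True if the candidate polygon's normal points into the same half-space
    # as the normal of the vertex being pushed.  Raise 'tolerance' (e.g. 0.5
    # for unit normals) to demand a tighter angular match.
    return dot(vertex_normal, poly_normal) > tolerance

During the push, any "teeth" polygon that fails this test would be skipped, and the vertex would keep travelling inwards until it hits one that passes - the back side of the tooth.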

Material zones and groups can help, but they'll take some setting up. Acceptable when you're planning to use the script to transfer loads of morphs, a large piece of overhead when transferring only one morph.

The Tailor uses the "closest vertex" principle, and before transferring morphs it maps all vertices of the conformer to matching vertices of the conforming target. Low poly conformers tend to work better than high poly, and the more extreme morphs transfer badly. 

I've been thinking about writing a Tailor-like Python script that doesn't use "closest vertex", but instead uses a weighted average of close vertices: each "morph donor" vertex within a certain distance of the vertex to be calculated contributes to the displacement, inversely proportional to its distance. 
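
A minimal sketch of that weighting, assuming 'donor' is a list of (position, delta) pairs for the morph-donor vertices, everything as plain (x, y, z) tuples:

import math

def weighted_delta(target_pos, donor, radius):
    # Inverse-distance weighted average of the donor deltas within 'radius'.
    total_w = 0.0
    accum = [0.0, 0.0, 0.0]
    for pos, delta in donor:
        d = math.sqrt(sum((p - t) ** 2 for p, t in zip(pos, target_pos)))
        if d > radius:
            continue
        w = (1.0 / d) if d > 1e-9 else 1e9   # an exact hit dominates
        total_w += w
        for i in range(3):
            accum[i] += delta[i] * w
    if total_w == 0.0:
        return (0.0, 0.0, 0.0)               # no donors in range
    return tuple(a / total_w for a in accum)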

Maybe, one day. Time is in short supply these days...

The pen is mightier than the sword. But if you literally want to have some impact, use a typewriter

My gallery   My freestuff


Spanki ( ) posted Sun, 07 January 2007 at 9:56 AM

Quote - I've been thinking about writing a Tailor-like Python script that doesn't use "closest vertex", but instead uses a weighted average of close vertices: each "morph donor" vertex within a certain distance of the vertex to be calculated contributes to the displacement, inversely proportional to its distance. 

 

Yep, that approach is similar to the most recent idea we've been talking about in this thread (it might be back a page or so by now).  But were you thinking of this being a shape-matching method, or a morph-delta-correlation method?  In other words, given the hypothetical situation of Sydney's head and V3's head, does it make Sydney's head look like V3's head, or does it just transfer the 'nose longer' morph over?

Cinema4D Plugins (Home of Riptide, Riptide Pro, Undertow, Morph Mill, KyamaSlide and I/Ogre plugins) Poser products Freelance Modelling, Poser Rigging, UV-mapping work for hire.


Spanki ( ) posted Sun, 07 January 2007 at 10:50 AM · edited Sun, 07 January 2007 at 10:52 AM

file_364769.jpg

Angel,

I'm not sure we're always talking with the same thought-train, so I did this image to help illustrate the problem.  As you can see, if you do make both heads into a 'sphere' shape, then yes - they are both now spheres, but the features across the surfaces of the spheres don't line up.

If you were using the same method (pushing vertices out) only on one mesh, stopping at the other mesh surface, you'd have the same (but worse-looking) problem.  Since the various features don't line up with each other, you can't correlate the vertices very well.  I could have lined up the noses and lips better, but then the ears and eyes would have been off.  Even as it is, the ears and eyes don't 'really' match very well.

Cinema4D Plugins (Home of Riptide, Riptide Pro, Undertow, Morph Mill, KyamaSlide and I/Ogre plugins) Poser products Freelance Modelling, Poser Rigging, UV-mapping work for hire.


Spanki ( ) posted Sun, 07 January 2007 at 11:59 AM

Cage,

I've been thinking about Joe's and your relative level of success with dis-similar shapes, trying to understand why you've gotten even that far :).  I'm a little confused about some of the results shown, though, because the V4->V3 headshape image Joe posted seems to show a 'shape matching' approach (with the piggy-backed verts ending up exactly where other verts are), but you mentioned earlier that you were just applying deltas - yet it looks like you still adjust 'all' verts and not just the ones involved in some morph.  Or maybe whatever version of the script was being used was moving vertices to the positions of other vertices (instead of applying morph deltas).

Anyway, back to the issue.  Let's assume:

  • we're trying to achieve morph-matching (not shape-matching)
  • Mesh A - has morphs that we want to apply to...
  • Mesh B - a similar but not matching shape (humanoid) is positioned in such a way that the features involved in the morph are lined up (by the human operator) as well as possible before running the script.

It seems to me that if you only apply deltas - so that only those vertices in Mesh B are moved which correlate to the Mesh A vertices involved in the morph in question - then you can get reasonable results, because (organic) morphs in general tend to move groups of nearby vertices in a similar direction and by a similar distance.

In other words, if the morph only moved a few vertices in a radical fashion, then unless you have a very strong correlation between Mesh A and Mesh B origin vertices, you'd get a poor result.  But if you have an approximate match up between the lips of Mesh A and Mesh B, and the morph was to "raise the center of the upper lip", then you'd likely get decent results, because the selection of vertices involved would be similar.  Of course depending on the match, you could get some vertices from the lower lip when you didn't want them and we've already mentioned the teeth being a problem.
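
Roughly, in Python, what I'm describing would look something like this - the data layout (a dict of A->B vertex index matches and a dict of per-vertex morph deltas) is just an assumption for illustration:

def transfer_morph(deltas_a, correspondence, threshold=1e-6):
    # Copy deltas across the A -> B vertex correspondence, skipping any
    # vertex that the morph doesn't actually move.
    deltas_b = {}
    for ia, delta in deltas_a.items():
        if max(abs(c) for c in delta) < threshold:
            continue                      # not involved in this morph
        ib = correspondence.get(ia)
        if ib is not None:
            deltas_b[ib] = delta          # or a weighted blend of deltas
    return deltas_b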

Anyway, I think this helps explain why you and Joe are getting as good results as you are... the vertices may not be matched up exactly correctly, but all the vertices within the area around it are likely to move in the same/similar fashion with organic morphs.

So I think I've changed my opinion on the relative degree of acceptable results you're likely to achieve, but this still relies on the human positioning the meshes before running the script, as Joe was doing.  This also means that you should line up the ears when transferring ear morphs and the noses when transferring nose morphs.  If the ear morph is a "make lobe longer" morph, and the lobes are wildly dis-similar in shape and position, then you still have a problem (you should line up the lobes instead of the overall ear).

What I would recommend then is to let the user select which morphs to transfer.  They'd position the meshes so that the noses lined up (for example) and select one or more 'nose' morphs, and hit the 'go' button.  The script would then determine which vertices were involved for each morph and only move (or create deltas for) matching vertices (however that's determined) in the other mesh.  The user could then reposition the mesh so that the lips lined up and run the script again to create those, etc.
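
The per-morph loop might be sketched like this; get_morph_deltas() and create_morph_target() are hypothetical stand-ins for however the deltas actually get read and written (PoserPython calls, or reading/writing the library files) - they are not real API names:

def involved_vertices(deltas, threshold=1e-6):
    # Indices of the vertices a morph actually moves (non-zero deltas).
    return [i for i, d in deltas.items() if max(abs(c) for c in d) >= threshold]

def run_transfer(selected_morphs, get_morph_deltas, correspondence,
                 create_morph_target):
    for name in selected_morphs:            # e.g. the user's 'nose' morphs
        deltas_a = get_morph_deltas(name)   # {A vertex index: (dx, dy, dz)}
        new_deltas = {}
        for ia in involved_vertices(deltas_a):
            ib = correspondence.get(ia)     # the A -> B match, however derived
            if ib is not None:
                new_deltas[ib] = deltas_a[ia]
        if new_deltas:
            create_morph_target(name, new_deltas)

The user would then re-position the meshes so the lips line up, pick the 'lip' morphs, and run it again.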

I think if you did it this way, along with some method of excluding groups of vertices/polys (I think excluding might work better than including), then you could get good results in most cases (with worse results where the shapes are drastically different, and/or where the morph is confined to a very small local region).

[continued in next post]...

Cinema4D Plugins (Home of Riptide, Riptide Pro, Undertow, Morph Mill, KyamaSlide and I/Ogre plugins) Poser products Freelance Modelling, Poser Rigging, UV-mapping work for hire.


Spanki ( ) posted Sun, 07 January 2007 at 12:20 PM

[continued]...

So, just to cover a few more issues... the methodology I'm proposing here means that the script ONLY looks at/uses the positions of the vertices in world space to determine how to make correlations between the two meshes.

It then creates the morphs in Mesh B by copying the morph deltas directly from Mesh A (adjusting the vertex indices), or by computing the weighted deltas as we were discussing earlier.  The point is that the script should completely ignore the world vertex positions once it's determined the correlations.
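
The weighted-delta variant would look something like this - assuming (purely for illustration) that each Mesh B vertex carries a list of (Mesh A vertex index, weight) pairs, with the weights summing to 1.0 as computed earlier in the thread:

def weighted_transfer(deltas_a, weights_b):
    # weights_b: {B index: [(A index, weight), ...]} -> {B index: (dx, dy, dz)}
    deltas_b = {}
    for ib, pairs in weights_b.items():
        dx = dy = dz = 0.0
        moved = False
        for ia, w in pairs:
            d = deltas_a.get(ia)
            if d is None:
                continue                 # that A vertex isn't in this morph
            moved = True
            dx += d[0] * w
            dy += d[1] * w
            dz += d[2] * w
        if moved and (dx or dy or dz):
            deltas_b[ib] = (dx, dy, dz)
    return deltas_b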

Once we have that in place, it becomes possible for the user to, for example, use any existing morphs (or magnets) on either or both of the two meshes to help achieve more similar shapes before running the script.  They could even do what I did in that image above and spherify the meshes, if that helped.  Since the script doesn't rely on where the vertices are (except to make the correlations), it doesn't matter what the actual shape of each mesh is - only that they are similar.

Cinema4D Plugins (Home of Riptide, Riptide Pro, Undertow, Morph Mill, KyamaSlide and I/Ogre plugins) Poser products Freelance Modelling, Poser Rigging, UV-mapping work for hire.

