Forum Moderators: Staff
Poser Python Scripting F.A.Q (Last Updated: 2024 Dec 02 3:16 pm)
Okay.
I've managed to get the Laplacian smoothing working. I'm going to do some refinement tomorrow and hopefully add the exclusion/post-processing selection boxes. Then I'll have another release.
I removed the datafile global and I've hidden some menu options which are there for my own access while developing more than for user-level access.
What about the question of whether welds are causing dropped verts? Does Poser's welding screw up the normals or anything?
===========================sigline======================================================
Cage can be an opinionated jerk who posts without thinking. He apologizes for this. He's honestly not trying to be a turkeyhead.
Cage had some freebies, compatible with Poser 11 and below. His Python scripts were saved at archive.org, along with the rest of the Morphography site, where they were hosted.
Quote - Okay.
I've managed to get the Laplacian smoothing working. I'm going to do some refinement tomorrow and hopefully add the exclusion/post-processing selection boxes. Then I'll have another release.
Cool beans! I look forward to seeing that.
Quote - What about the question of whether welds are causing dropped verts? Does Poser's welding screw up the normals or anything?
That shouldn't be a factor... the normals are computed from the tripoly faces now, so it shouldn't affect anything.
Cinema4D Plugins (Home of Riptide, Riptide Pro, Undertow, Morph Mill, KyamaSlide and I/Ogre plugins) Poser products Freelance Modelling, Poser Rigging, UV-mapping work for hire.
...So effectively (as far as the script is concerned), the mesh is not welded and there are no polys on the other side of the split - the welding isn't visible to the script.
You seem reluctant to accept that minor (even infinitesimal) differences in group-boundary-splits would be causing this (?), but - even with identical meshes - it was causing a problem, which is why the error-correction code went back in. This fixed identical mesh comparisons, but (in case you ask,) expanding the range of that would not be a good general fix.
Keep in mind that the ray being cast out from those vertices (along their normal) might be angling downwards (at a head/neck split, for example)... like where the back of the head slopes towards the body as it reaches the neck, so the polygons are 'leaning over', which causes their normals to point downwards. This means that the ray will also shoot out at a slight downward angle, so it might pass right under the polygons of the other head mesh, even if they are split at the same place.
There's nothing wrong with this - it's expected behavior. We shouldn't angle it back up or increase the error-correction value to find the next closest vertex or artificially stretch the polys of the other mesh - those are all bad solutions in my opinion. The fix/solution is to start looking at the next actor in line, where an intersection can take place.
Having said that, comparing a head to an entire neck (or chest to an entire abdomen, etc) is probably overkill (we only need to look at the polys around the split area), so we could probably go back to un-mixed-boxes, or make a new min/max box out of the misses and expand that a little, but doing something like that would just be a speed optimization to cull out some of the new polygons of the next actor being checked.
Cinema4D Plugins (Home of Riptide, Riptide Pro, Undertow, Morph Mill, KyamaSlide and I/Ogre plugins) Poser products Freelance Modelling, Poser Rigging, UV-mapping work for hire.
*"You seem reluctant to accept that minor...."
*No, I don't mean to seem that way. Sorry. Last night I had some thoughts and I wanted to voice them. You've responded to them well, but where an idea was overlooked I sought clarification. I'm trying to learn what's a realistic concern and what isn't. You've helped me a lot with that, but, as you know, I'm not quite there yet. :-P So I keep asking.
*"There's nothing wrong with this - it's expected behaivior."
*That's kind of where I miss out, I think. A lot of expected behavior seems to really surprise me. Still learning. :-P Is there any other expected behavior that might set me off on tangents? So far we have the lumpy morphs and the body part edge thing. What other hypothetical cases could there be?
*"Cool beans! I look forward to seeing that."
*The simple Laplacian (I almost called it 'Lamarckian' - heh) smoothing being applied has at least one known issue. It can have trouble with concave polys, in which it ends up worsening the concavity or behaving in ways that might seem surprising. I saw evidence of this in the inner eye region with my test, after a couple of smoothing passes. So it will need some type of screening or restricted selection to be applied effectively.
There are better smoothing algorithms. I'm not sure whether I can figure them out, though. Once I have this set up, I'll look at the other examples I have. In the meantime, I have averaging and this....
*"The fix/solution is to start looking at the next actor in line, where an intersection can take place."
*What are the implications for the datafile format? Just so's I know what may be coming down the pike. I don't want to commit too fully to developing various ideas if basics may still change....
===========================sigline======================================================
Cage can be an opinionated jerk who posts without thinking. He apologizes for this. He's honestly not trying to be a turkeyhead.
Cage had some freebies, compatible with Poser 11 and below. His Python scripts were saved at archive.org, along with the rest of the Morphography site, where they were hosted.
Quote - *"You seem reluctant to accept that minor...."
*No, I don't mean to seem that way. Sorry. Last night I had some thoughts and I wanted to voice them. You've responded to them well, but where an idea was overlooked I sought clarification. I'm trying to learn what's a realistic concern and what isn't. You've helped me a lot with that, but, as you know, I'm not quite there yet. :-P So I keep asking.
Ok, understood :).
Quote - *"There's nothing wrong with this - it's expected behaivior."
*That's kind of where I miss out, I think. A lot of expected behavior seems to really surprise me. Still learning. :-P Is there any other expected behavior that might set me off on tangents? So far we have the lumpy morphs and the body part edge thing. What other hypothetical cases could there be?
Hmmm.. probably too hard to say, until we run across them. I'm normally pretty good about anticipating areas of potential confusion, but I do take some things for granted that might not always be clearly stated. I'm also generally pretty good with 'instinct' about whether or how something might work, but generally don't know the details until I sit down and think it through, so I'm not always prompt in explaining my reasoning behind something. In addition to both of those, there's the fact that some of this stuff comes as a surprise to me too.
Quote - *"Cool beans! I look forward to seeing that."
*The simple Laplacian (I almost called it 'Lamarckian' - heh) smoothing being applied has at least one known issue. It can have trouble with concave polys, in which it ends up worsening the concavity or behaving in ways that might seem surprising. I saw evidence of this in the inner eye region with my test, after a couple of smoothing passes. So it will need some type of screening or restricted selection to be applied effectively.
There are better smoothing algorithms. I'm not sure whether I can figure them out, though. Once I have this set up, I'll look at the other examples I have. In the meantime, I have averaging and this....
If needed, we can come up with some screening for concave polygons (I assume you mean concave polygons and not non-planar polygons... right? Either way, we can screen for them, but I'd need to know which you mean).
Quote - *"The fix/solution is to start looking at the next actor in line, where an intersection can take place."
*What are the implications for the datafile format? Just so's I know what may be coming down the pike. I don't want to commit too fully to developing various ideas if basics may still change....
The only implications for the datafile format would be if we intended to merge multi-actor data into one file. In that case, I'd recommend what I proposed the other day... new "actors:" records, separating sets of weight records. If we keep the files separate, then there is no change that I know of.
Cinema4D Plugins (Home of Riptide, Riptide Pro, Undertow, Morph Mill, KyamaSlide and I/Ogre plugins) Poser products Freelance Modelling, Poser Rigging, UV-mapping work for hire.
Quote - If needed, we can come up with some screening for concave polygons (I assume you mean concave polygons and not non-planar polygons... right? Either way, we can screen for them, but I'd need to know which you mean).
I'll assume that you do mean concave polygons, because if you mean non-planar, then we're in trouble (a huge percentage of Poser mesh quads are non-planar).
To test for concavity, it's much like the old check_bounds() code... First, you only need to check polygons with 4 or more vertices. Next, you walk around the vertices, computing a normal at each one (a cross-product of the two edges that end at that vertex). As you do this, you compare the signs of the Z axis of that normal and fail if they differ (the Z axis will flip from positive to negative or vice versa if the polygon is concave... if they are all positive or all negative, it's convex).
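In pseudo-Python, that test boils down to something like this (just a sketch - it assumes the polygon comes in as plain (x, y, z) tuples in winding order and lies roughly in the XY plane, per the Z-axis wording above; for an arbitrary orientation you'd compare against the face normal instead):

def is_concave(verts):
    # verts: list of (x, y, z) tuples in winding order.  Only polygons
    # with 4 or more vertices can be concave at all.
    n = len(verts)
    if n < 4:
        return False
    sign = 0
    for i in range(n):
        # edge arriving at vertex i and edge leaving it
        ax = verts[i][0] - verts[i - 1][0]
        ay = verts[i][1] - verts[i - 1][1]
        bx = verts[(i + 1) % n][0] - verts[i][0]
        by = verts[(i + 1) % n][1] - verts[i][1]
        cz = ax * by - ay * bx        # Z component of the cross product
        if cz != 0.0:
            if sign == 0:
                sign = 1 if cz > 0.0 else -1
            elif (cz > 0.0) != (sign > 0):
                return True           # the sign flipped -> concave
    return False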
Cinema4D Plugins (Home of Riptide, Riptide Pro, Undertow, Morph Mill, KyamaSlide and I/Ogre plugins) Poser products Freelance Modelling, Poser Rigging, UV-mapping work for hire.
Uh... what's the difference between concave and non-planar? For that matter, what is a concave polygon? (And can there be a concave tri?) I was parroting something I'd read somewhere, which had a picture of the potential results, and the results I saw in the inner eye fit what their picture showed. But I didn't save the page or the link where I encountered the information.... D'oh. (Would making the non-planar polys planar serve as another smoothing alternative? Or would that just flatten things out, as you say will happen with the shape-matching? For that matter, once we have smooth, the problem with faceted shape-matching may be lessened....)
Yes, really - if I seem resistant to things, it's usually because I don't understand yet. Sometimes you get it and you're ready to move on, but I'm still fixating because I just haven't grasped why an idea is bad. I'm going to try to ask, rather than panic or obsess. But there's a reason I have the turkeyhead disclaimer in my sig-line, and this sort of behavior is pretty much it. Again, I apologize.
I'm going to look at a couple of additional screening methods, too. I downloaded the Wings source code, but I haven't been able to locate the code for the tighten function yet. Might be nice to be able to replicate something like that in PoserPython. One thing I should add to the Laplace is a check to preserve edges, places where polys have no neighbors. My current hard edge code can be adapted for that.
If we can avoid seriously complicating the data format, I think things will be better. Changing the format has implications for several existing functions, creating the opportunity to completely break things that already work....
===========================sigline======================================================
Cage can be an opinionated jerk who posts without thinking. He apologizes for this. He's honestly not trying to be a turkeyhead.
Cage had some freebies, compatible with Poser 11 and below. His Python scripts were saved at archive.org, along with the rest of the Morphography site, where they were hosted.
Well, fazzbazz. Rosity, I hate your time limits and your response editor. Hate to bite the hand that feeds me, but it's the truth. :(
A couple of errors in the above. Spanki, I noticed your explanation about concave polys, after I posted. Presumably there can't be concave tris? Or could those normals still point backwards, tris or no? Hmm. If tris can't be concave, the error I encountered has another cause, because I was working with our triangles.
*"I'm going to look at a couple of additional screening methods, too."
"Screening" in the above should read "smoothing".
===========================sigline======================================================
Cage can be an opinionated jerk who posts without thinking. He apologizes for this. He's honestly not trying to be a turkeyhead.
Cage had some freebies, compatible with Poser 11 and below. His Python scripts were saved at archive.org, along with the rest of the Morphography site, where they were hosted.
Basically, from a visual standpoint, you can imagine walking around the polygon from vertex to vertex, in either a clockwise or counter-clockwise direction (some apps use a right-handed system, some use a left-handed system). As you walk, you either turn right or left to get to the next vertex. On the left one (moving clockwise around), each new vertex is "to the right". But on the right one, most of them are "to the right", but at the vertex that's pushed in, the angle to the next vertex is "to the left".
Concave polygons are generally frowned upon.. they can cause trouble with back-face culling and depending on the rendering engine, that 'cavity' might even get filled in.
Non-planar polygons also require more than 3 vertices. The basic definition of planar is if all vertices that make up the polygon lie in the same plane. Once you get above 3 vertices (needed to describe the plane to start with), one or more of them may not be in the same plane... then it's non-planar.
Non-planar polygons can cause surface Normal calculations to be off, because the surface doesn't have just one specific direction it faces - it depends on which triangle of the polygon you're looking at. Surface Normals are used for Vertex Normal generation, backface-culling, shading and lighting calculations and other things. So non-planar polygons will typically have some degree of shading/rendering artifacts. The system Poser uses is pretty forgiving when it comes to these and you'll find that most Poser meshes have tons of non-planar polygons.
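A quick-and-dirty planarity check along those lines might look like this (just a sketch - plain (x, y, z) tuples again, and the tolerance would need tuning to the mesh's scale):

def is_planar(verts, tol=1e-6):
    # verts: list of (x, y, z) tuples.  Tris are always planar.
    if len(verts) < 4:
        return True
    x0, y0, z0 = verts[0]
    x1, y1, z1 = verts[1]
    x2, y2, z2 = verts[2]
    # plane normal from the first three vertices
    ux, uy, uz = x1 - x0, y1 - y0, z1 - z0
    vx, vy, vz = x2 - x0, y2 - y0, z2 - z0
    nx, ny, nz = uy * vz - uz * vy, uz * vx - ux * vz, ux * vy - uy * vx
    for x, y, z in verts[3:]:
        # signed distance from the plane (scaled by the normal length)
        if abs((x - x0) * nx + (y - y0) * ny + (z - z0) * nz) > tol:
            return False
    return True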
Our script for the most part avoids any problems with non-planar or concave polygons, because it only deals with triangles. But, if your new smoothing stuff is working with our triangles, and it's re-positioning the vertices, then it might be creating concave polygons. I still hadn't looked at that code to see what it does, but my guess is that you'd want to feed it the original polys and not the tri-polys, so that it at least has some frame of reference of how the polygons are made up (it may well have code in place to keep from making concave polygons as it moves vertices around, but it has to know about all the vertices of the polygon for that to work).
Cinema4D Plugins (Home of Riptide, Riptide Pro, Undertow, Morph Mill, KyamaSlide and I/Ogre plugins) Poser products Freelance Modelling, Poser Rigging, UV-mapping work for hire.
I didn't get as much done today as I'd hoped. This update contains the smoothing, but the setup is still incomplete. This is primarily being posted so Spanki can take a look at the process....
===========================sigline======================================================
Cage can be an opinionated jerk who posts without thinking. He apologizes for this. He's honestly not trying to be a turkeyhead.
Cage had some freebies, compatible with Poser 11 and below. His Python scripts were saved at archive.org, along with the rest of the Morphography site, where they were hosted.
Heh...
gDisplacement[v[i]].addVec( g[v[i]].vecTo(g[v[n(i)]]) );
...damn, that's a horrid piece of C code - I'm surprised you were able to figure that out! :). Nice job.
This guy certainly has an 'interesting' coding style... he has long-winded, fairly descriptive function names like:
resetDisplacementsAndValence3D()
computegDisplacementsAndValence3D()
NormalizeDisplacements3D()
VolumeConservingSmoothingEdgeRelaxation()
etc.
...and at the same time, he has single-letter class member variables (and function names!!) like:
g,v,o,oo,cc,bm,bmc,D,V,UN,q,qq,m,mm
n(),p(),d2(),dm()
...sheesh.
Cinema4D Plugins (Home of Riptide, Riptide Pro, Undertow, Morph Mill, KyamaSlide and I/Ogre plugins) Poser products Freelance Modelling, Poser Rigging, UV-mapping work for hire.
Yes. The "easy" Laplace was hard to decipher. So I'm a bit worried about the more complex types he shows how to use. I'd like to get a version of his volume preserving smooth working....
Have you tested the smoothing functions in the script? Kind of disappointing, so far. None of it fixes my problem. I still get better results from combining the transferred morph with the transferred shape. So I need to experiment a bit.
Working on the exclusion and selection boxes, now. That should help with all of this.
===========================sigline======================================================
Cage can be an opinionated jerk who posts without thinking. He apologizes for this. He's honestly not trying to be a turkeyhead.
Cage had some freebies, compatible with Poser 11 and below. His Python scripts were saved at archive.org, along with the rest of the Morphography site, where they were hosted.
With the little time I had to spare, I ran a few tests with it and spent some time trying to figure out his C code, so I didn't spend a lot of time looking at results or even understanding what it's doing just yet.
Cinema4D Plugins (Home of Riptide, Riptide Pro, Undertow, Morph Mill, KyamaSlide and I/Ogre plugins) Poser products Freelance Modelling, Poser Rigging, UV-mapping work for hire.
The Laplace method I adapted basically moves clockwise around the tri (or poly) and moves each vertex toward the clockwise neighbor vertex. This approach has the problem with concave polys (apparently) and I've also noted that it has trouble with poles. Try running it on a morph created on the Poser ball prop. The pole ends of the ball seem to move toward +z or -z in my tests. The averaging handles this aspect much better.
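For reference, the basic pass boils down to something like this (a rough sketch only - the verts/polys structures and the strength factor here are made-up names, not the script's actual ones):

def laplace_pass(verts, polys, strength=0.5):
    # verts: list of [x, y, z]; polys: list of vertex-index tuples.
    # For every polygon, push each vertex toward its (clockwise)
    # neighbour, then apply the accumulated displacement averaged
    # by valence - mirroring the C snippet quoted earlier.
    disp = [[0.0, 0.0, 0.0] for _ in verts]
    valence = [0] * len(verts)
    for poly in polys:
        n = len(poly)
        for i in range(n):
            a, b = poly[i], poly[(i + 1) % n]   # vertex and its neighbour
            for k in range(3):
                disp[a][k] += verts[b][k] - verts[a][k]
            valence[a] += 1
    for vi in range(len(verts)):
        if valence[vi]:
            for k in range(3):
                verts[vi][k] += strength * disp[vi][k] / valence[vi]
    return verts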
To understand his code, I kind of cheated. I searched all the files for variable and function names and created a new document which contained only the pertinent code. The vector math functions and a couple of those single-letter functions were tricky to figure out. It was made easier by the fact that his Z coordinates have more straightforward coding, which can also be applied to the X and Y. If you glance at his volume- and edge-preserving code, however, you'll see a huge, long function which looks quite intimidating....
Okay. I have multi-region selections. Moving on to exclusions....
===========================sigline======================================================
Cage can be an opinionated jerk who posts without thinking. He apologizes for this. He's honestly not trying to be a turkeyhead.
Cage had some freebies, compatible with Poser 11 and below. His Python scripts were saved at archive.org, along with the rest of the Morphography site, where they were hosted.
Now I move on to the handling of neighbor body parts, as we discussed. Do you have any preferred methods in mind for this, Spanki? My only thought is to keep the datafile format the same through the process.
===========================sigline======================================================
Cage can be an opinionated jerk who posts without thinking. He apologizes for this. He's honestly not trying to be a turkeyhead.
Cage had some freebies, compatible with Poser 11 and below. His Python scripts were saved at archive.org, along with the rest of the Morphography site, where they were hosted.
Now I'm really going to move on to the neighbor body parts.
Eh... Spanki - you still out there? Everything cool?
And, oh: it is Valentine's Day. Perhaps that deserves a "woot". :-P Have a decent one.
===========================sigline======================================================
Cage can be an opinionated jerk who posts without thinking. He apologizes for this. He's honestly not trying to be a turkeyhead.
Cage had some freebies, compatible with Poser 11 and below. His Python scripts were saved at archive.org, along with the rest of the Morphography site, where they were hosted.
Ahh, hey Cage. Yup - I'm still here, just tied up with other work atm.
Neighboring body parts... hmm... I don't recall off-hand if you can determine neighboring actors from Poser-Python... actually, I guess there is a way to GetParent() and/or GetChildren(), so you should be able to determine which actors are connected.
Of course the eyes are children of the head, and you wouldn't want to parse those for head matches, so I'm not sure how to detect that situation (avoiding hard-coded actor names, if possible). I think V4 also has lower-jaw and tongue as children of the head... I guess you'd want to parse those when doing teeth/gums and tongue matching, but not for the head skin. I don't think you can rely on materials either, since the back of V4's noggin is SkinNeck or something (it's not in front of me... maybe it's Scalp?). So in some cases you'll want to match between different materials.
Maybe the best course of action would be to run the test... at the end, look through the xpolys list to see if you have misses. If so, then check to see if they are on 'edge polys' (I think you have a way to look that up already). If so, then determine parent and children actors and just bring up a list, for the user to select from... let the user decide which other actors to match against.
Basically, I don't think I'd complicate things by trying to scan all potential actors at once (keeping track of various vertex index ordering, etc). I'd just do the first one, then if you determine that some edge vertices were missed, bring up a short list of additional actors for the user to select from and run additional scan(s) on that/those.
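Something along these lines might do for building that short list (just a sketch, assuming the PoserPython Actors()/Parent()/InternalName() calls behave as I remember them - and it doesn't try to filter out the eyes/tongue cases, that's still up to the user):

import poser

def candidate_actors(fig, target):
    # Short list of candidates: the target's parent, plus any actors
    # whose parent is the target, for the user to pick from.
    tname = target.InternalName()
    found = []
    for act in fig.Actors():
        parent = act.Parent()
        if parent and parent.InternalName() == tname:
            found.append(act)                 # a child of the target
        elif act.InternalName() == tname and parent:
            found.append(parent)              # the target's own parent
    return found

# e.g.
# scene = poser.Scene()
# names = [a.Name() for a in candidate_actors(scene.CurrentFigure(), scene.CurrentActor())]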
If you want to keep the file format the same, you just write out separate files for each pairing, and possibly add an "other_actors:" record (listing additional source actors, for this target) to the format to hint to the script that more than one file needs to be looked at when creating morphs.
I dunno... personally, I think I'd look into doing the combined file approach, with multiple groups of w records, separated by "actors:" records. I'd also move the vertex count totals from the header record to the "actors:" record, for each pairing. Then when reading the file, you just build multiple tables and call transfer_morph() multiple times with different actors.
Sorry, these are just some random thoughts because you asked ... I don't really have the time to sit down and think it through yet.
Cinema4D Plugins (Home of Riptide, Riptide Pro, Undertow, Morph Mill, KyamaSlide and I/Ogre plugins) Poser products Freelance Modelling, Poser Rigging, UV-mapping work for hire.
If so, then check to see if they are on 'edge polys' ...
There may be some misses not on edge polys that could be caught by scanning another actor as well, but when that's the case, then there will be some edge poly vertices missed as well. In other words, if any of the missed verts are on edge polys, then set up to scan other actors.
One tricky issue is...
You don't really want to scan all the vertices where you already have valid hits again with the new actor, but you also don't want to write out the associated data again (they reference a different actor's poly/vertex indices). So you'd probably want some other means of identifying vertices that have already been resolved (before calling linecast_loop() with a new actor, maybe set all non -1 values to -2 and change the appropriate "if xpolys[n] != -1" tests to just look for >= 0 for valid data).
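In other words, something like this (a sketch - xpolys and linecast_loop() are the script's names, the helper itself is hypothetical):

def mark_resolved(xpolys):
    # Before scanning a new source actor, turn every already-matched
    # entry (anything other than -1) into -2, so it gets skipped but
    # isn't mistaken for a miss.
    for n in range(len(xpolys)):
        if xpolys[n] != -1:
            xpolys[n] = -2
    return xpolys

# after this, "valid data" tests become "xpolys[n] >= 0", and
# "xpolys[n] == -1" still means "not matched by any actor yet"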
Cinema4D Plugins (Home of Riptide, Riptide Pro, Undertow, Morph Mill, KyamaSlide and I/Ogre plugins) Poser products Freelance Modelling, Poser Rigging, UV-mapping work for hire.
...or better yet, keep the xpolys array around until you've built the vertex region list for the new actor... and when building that list, only include vertices where xpolys[n] == -1. Or something like that.
I'd also just use the bounding box of the target mesh (plus some expansion), since we only need to catch the polys of the new source mesh near the edge of the original target mesh.
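Something like this for the box (just a sketch over plain (x, y, z) tuples - the expansion amount is a made-up number):

def expanded_bounds(verts, expand=0.01):
    # Min/max bounding box of the target mesh, inflated a little,
    # for culling the new source actor's polygons before line-casting.
    xs = [v[0] for v in verts]
    ys = [v[1] for v in verts]
    zs = [v[2] for v in verts]
    lo = (min(xs) - expand, min(ys) - expand, min(zs) - expand)
    hi = (max(xs) + expand, max(ys) + expand, max(zs) + expand)
    return lo, hi

def inside(p, lo, hi):
    return all(lo[k] <= p[k] <= hi[k] for k in range(3))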
Cinema4D Plugins (Home of Riptide, Riptide Pro, Undertow, Morph Mill, KyamaSlide and I/Ogre plugins) Poser products Freelance Modelling, Poser Rigging, UV-mapping work for hire.
Hmm. Hmm, hmm.
I'm wondering if it might work best to allow the user to make multiple selections in the source actor box, rather than trying to auto-scan each time for extra body parts. It also seems to me that we should use an unmerged bounding box, with the extra source actor as the basis, for these secondary comparisons. And the results will have to be logged to a different datafile than those of the base comparison, because we'll be transferring morphs from the secondary source to the target, but we should use some xpolys flag system to flag which verts are still unmatched.
Hmm.
===========================sigline======================================================
Cage can be an opinionated jerk who posts without thinking. He apologizes for this. He's honestly not trying to be a turkeyhead.
Cage had some freebies, compatible with Poser 11 and below. His Python scripts were saved at archive.org, along with the rest of the Morphography site, where they were hosted.
And, ah, in the meantime, here's an update with some refinements and corrections to the smoothing functions, and more namespace fixes.
Does anyone out there read Erlang code?
===========================sigline======================================================
Cage can be an opinionated jerk who posts without thinking. He apologizes for this. He's honestly not trying to be a turkeyhead.
Cage had some freebies, compatible with Poser 11 and below. His Python scripts were saved at archive.org, along with the rest of the Morphography site, where they were hosted.
Automating Multi-actor morph loading would be higher on my list than Multi-actor comparisons (!).. simply because not everyone who might use this (suite of) scripts would have the knowledge/patience/know-how or even 'need' to use the front-end side to do comparisons (assuming good correlation files were available for download or purchase). But pretty much anyone using the script(s) would be using them to transfer morphs.
I'm not sure what the issues are, but I suggested merging the info into one file because of the above. It makes sense to keep all the data relative to a particular bodypart transfer in one file, rather than having to track down some other file that may or may not exist or may have had its name changed or whatever.
No help on erlang code, sorry.
Cinema4D Plugins (Home of Riptide, Riptide Pro, Undertow, Morph Mill, KyamaSlide and I/Ogre plugins) Poser products Freelance Modelling, Poser Rigging, UV-mapping work for hire.
*"It makes sense to keep all the data relative to a particular bodypart transfer in one file, rather than having to track down some other file that may or may not exist or may have had it's name changed or whatever."
*You're thinking bigger than I am, as usual. Such a change has implications for all of the read, write, and merge functions, requiring more conditional code for each. It has implications for the automated path handling, because we'll be putting more than one body part in a given folder. I'll need to think about that a lot before I can consider changing such things. Whereas keeping the file format and path structure the same really just means I need to look more at adding multi-actor looping to the comparison and morph-loading processes. Changing the file format introduces some changes which will have effects all over the script.
If you really like the idea of merging the file formats, perhaps we should be discussing the datafile path handling and things like that. As long as we're thinking in terms of making things accessible to the average user, how can the filepath handling be improved? We'll still need all the same data available to read/write, and I assume the average user would prefer to have reading and writing automated for most cases. If our standard of valuation is now accessibility (rather than "sparseness" or whatever), maybe we should be re-considering several things from that perspective.
So I hesitate to just jump in and start making such big changes. Particularly since my changes tend to make things more complex and less accessible, unless I have a clean Master Plan outlined for me....
Hmm.
===========================sigline======================================================
Cage can be an opinionated jerk who posts without thinking. He apologizes for this. He's honestly not trying to be a turkeyhead.
Cage had some freebies, compatible with Poser 11 and below. His Python scripts were saved at archive.org, along with the rest of the Morphography site, where they were hosted.
I'm just thinking out-loud and on-the-fly with the above, due to the limited time I have right now. I do agree that such a change would have wide implications for other parts of the script and I also agree that it's a complex issue that should be given some thought.
There are several separate, but related issues...
1. multi-actor (or cross-actor or actor-spanning) correlation data generation
In most cases, in order to get a 'complete' dataset for a target actor, you're going to have to include comparisons to multiple source actors ('complete' does not necessarily rule out or contradict the 'sparse' nature of the file format, but instead refers to 'coverage area' of all the intended vertices - we may not care that the eyelashes are included, but we want to make sure all of the neck is).
Due to the nature of this process...
a) due to the way the polygons and vertices are indexed, each new source actor comparison will generate actor-specific (or actor-relative) data.
b) the second actor scanned should (could) only need to fill in 'missed' vertex info from the first actor.
c) the third (and each subsequent) actor scanned should (could) only need to fill in 'missed' vertex info from the previous actors scans.
d) the 'area' (region box) where these additional polygons are going to be found is in close proximity to the edge of the target actor mesh - ideally, a new bounding box, created from the currently missed vertices of the target mesh, inflated just enough to include a vertex of the polygon we're trying to hit out of the new source mesh - we probably don't need to look at ALL of the polygons of the new source mesh. This is just a speed optimization.
2. morph transfer, including the potential for multi-source-actor data for a given target actor
Some morphs (nose-wider, ears-pointy, etc) are localized and will not involve multiple source actors. Others (Muscular, Neck-wider, etc) will likely involve 2 but potentially any number of source actors (most parts have 2 other parts welded to them, some have one, the chest has 4, the hand has 6, etc.), depending on the area affected and how the bodypart groups are split between the two figures.
In either case, if cross-actor correlation data exists and the morph in question exists in the second (or third) actor, then that data should be used for the specified vertices to complete the morph transfer.
3. automating the above
Referring back to the above on the correlation data generation side of things, finding a solution that satisfies all of a, b, c, d at the same time (for an optimized automated process) is trickier than it seems on the surface (I'm betting on an "aha! - this is what he was getting at" moment sometime in your future). I don't really have the time to (study, understand and then) explain better what I mean about this, other than to refer you back to my comments earlier about the xpolys issue.
Automating the morph transfer side of the process will involve...
a) access to the (actor-specific/relative) data generated earlier (filename / file-format / number and/or location of files issues)
b) accounting for those differences when generating deltas for the new morph (by that, I just mean that part of the morph will come from one source actor, using one set of correlation data and other part(s) of the morph will come from other source actor(s) using separate correlation data).
c) if transferring multiple morphs at once, then a and b are multiplied.
4. file storage of cross-actor / multi-actor / actor-spanning correlation data
Whether or not the cross-actor correlation process is automated, the data needs to be stored somewhere, in some manner and format that is preferably deterministic, or at least user-specifiable, so it can be found later by the morph-transfer end of things.
Obviously file reading / writing / merging support routines will all be affected, but I see those as being written in support of whatever the infrastructure is determined to be, based on the other issues above.
...so those are some of the separate, but related issues involved. As stated earlier, the current script, using the current file format, should be capable of doing multi-actor morph transfers - albeit in a manual, non-automated fashion. Which means that most of the above is really only relevant to trying to automate one end or the other of the process.
I'm afraid that I just don't have the time to help work this out currently, but I hope the above at least helps outline what some of the issues are. My only other additional comment / suggestion at this point would be to repeat what I said earlier... once you have good correlation data for two figures, the focus / interest / usefulness switches over to the morph-transfer end of things, so if I was looking to automate one side or the other, I'd favor that side.
Cinema4D Plugins (Home of Riptide, Riptide Pro, Undertow, Morph Mill, KyamaSlide and I/Ogre plugins) Poser products Freelance Modelling, Poser Rigging, UV-mapping work for hire.
Thanks. I'll think about all of this, seeking that ever-elusive "aha!" moment. :)
One new(?) idea I see in the above is transferring multiple morphs in one run. That, in itself, is interesting.
Overall, I'm still not sure I see the benefit of merging data for multiple actors into any given datafile. But I'll keep re-reading what you've posted.
My impulse would be to allow multiple source actor selections, rather than trying to scan the figure looking for appropriate parts. Then I would just loop through the selections, checking for morphs of the same name, looking up the new datafile for each set of correlated actors. This would require a bit more attention on the user's end, but would presumably be less code-heavy and less open to various flaws or errors. And since the user has a choice, it's more flexible than automatically checking all the actors which might be involved.
So I need to think about this. If I dive into something without feeling that I have a good reason to be doing it, I'm likely to do a crummy job. Okay, a crummier job. :-P I plan to try to do some cross-figure body morph testing, similar to what you and JoePublic have done, but I don't seem to have great figures to use to approach this readily. I'll fire some up and see what looks good.
===========================sigline======================================================
Cage can be an opinionated jerk who posts without thinking. He apologizes for this. He's honestly not trying to be a turkeyhead.
Cage had some freebies, compatible with Poser 11 and below. His Python scripts were saved at archive.org, along with the rest of the Morphography site, where they were hosted.
Quote - My impulse would be to allow multiple source actor selections, rather than trying to scan the figure looking for appropriate parts. Then I would just loop through the selections, checking for morphs of the same name, looking up the new datafile for each set of correlated actors. This would require a bit more attention on the user's end, but would presumably be less code-heavy and less open to various flaws or errors. And since the user has a choice, it's more flexible than automatically checking all the actors which might be involved.
Works for me. Just allow multiple source actor selections, but only a single target.
One of my goals was to remove as much reliance on the filenaming as possible and consolidating target actor info into one file helps accomplish that, but your code already has mechanisms in place to look for files based on actor names, so it's not that important vs complications it might cause you to get the info merged - just go with what you're comfortable with.
Also, limiting the bounding box for cross-actor scans is just an optimization and may not be worth the trouble for non-head parts (which typically have far fewer polygons) anyway.
So, having said that, on the correlation end of things, my remaining suggestion would be for you to treat previous matches as 'screened' vertices on the target mesh. In other words, once you've written these out to the file, screen them out from subsequent passes.
I'm not looking at the code, so I can't talk in detail about the difference between 'screening' them out - BEFORE building the region list - vs just relying on the xpolys array, but that is the effect I'm after - previously matched vertices should no longer be included in the target vertex-region list for subsequent actor scans. The xpolys/xpoints arrays will have BAD data in them from the previous scan relative to new actor scans - they need to be rebuilt for each new actor.
Again, the above is just an optimization... there's no need to look for hits on vertices that have already been correlated in previous passes. However, it also shouldn't hurt anything, so the alternative is to 'reset all results data' for each new actor correlation pass (clear out, rebuild or re-initialize xpolys, xpoints, and any other 'results' type arrays).
For the morph transfer end of things, yes - allowing multiple morph transfers at once would really be handy!
Cinema4D Plugins (Home of Riptide, Riptide Pro, Undertow, Morph Mill, KyamaSlide and I/Ogre plugins) Poser products Freelance Modelling, Poser Rigging, UV-mapping work for hire.
*Well, you did kind of say that if I was okay with the datapath handling, you were okay. So I kind of built a few things on it. cough
But I'm really most interested in doing this the best way, not necessarily the easiest. Sometimes these overlap. I think this can be a nice Poser utility. No point in letting Cage kludge it up too much, even if he started the darned thing. :) So don't cave in to me on something like this if you really think a revision to datafile/path handling would be a serious improvement. But don't be surprised when it takes me a while to build myself up to changing it, either.... :-P
Guess what? Guess what? It's really cool. It's neat. The UV transfer works pretty much as I anticipated. Check out the attached image. There are some flaws to be worked out (the current form can't do anything with UV seams, and it hangs when I try to test with the Vickies), but it passed the two balls test with flying colors. Neat-o.
===========================sigline======================================================
Cage can be an opinionated jerk who posts without thinking. He apologizes for this. He's honestly not trying to be a turkeyhead.
Cage had some freebies, compatible with Poser 11 and below. His Python scripts were saved at archive.org, along with the rest of the Morphography site, where they were hosted.
To tell you the truth, I had been using the non-path options and hadn't really paid attention to how you were doing the path stuff (didn't have/take the time to figure out how to enable it).
As I said, it's no big deal. I was trying to account for other apps (ahem) creating these files, where other tools might be utilized, but where the logic and info needed to create the specific naming and filepaths may not be available (no .cr2 loaded, so no 'actor' info available, etc). But at the time, it was still taking an hour or 3 to do the correlations in Poser, so it's just not as important anymore.
Now that I see the structure, my one comment would be that the files are 'target' related info, but you appear to list the 'source' in each case for file/folder names... no biggie though (I think I probably used a similar order on some of the record info).
The UV stuff sounds cool - congrats!
Cinema4D Plugins (Home of Riptide, Riptide Pro, Undertow, Morph Mill, KyamaSlide and I/Ogre plugins) Poser products Freelance Modelling, Poser Rigging, UV-mapping work for hire.
...strike that.. two comments... the second is that I would have used 'internal' names of the actors in the "actors:" record... the display names can have spaces in them (difficult to parse). ie.
actors: lShin lThigh
instead of
actors: Left Shin Left Thigh
...the second one looks like 4 actors :).
Cinema4D Plugins (Home of Riptide, Riptide Pro, Undertow, Morph Mill, KyamaSlide and I/Ogre plugins) Poser products Freelance Modelling, Poser Rigging, UV-mapping work for hire.
Hey, this is cool. Spanki, sir, your weighting technique is the greatest! :)
The script was slow because PoserPython provides no TexVertex() lookup object, so I had to do a TexVertices() lookup for every texvertex. I pre-compiled a texvertices list for lookups, and even V1-V3 runs fast now. I'll have to see what I can do about the seams. This means that given a well-correlated datafile, morphs and UVs can be passed easily. How cool.
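The fix amounts to something like this (a rough sketch, assuming the usual Geometry()/TexVertices()/U()/V() calls - the variable names are made up):

import poser

geom = poser.Scene().CurrentActor().Geometry()

# Build the lookup list once, instead of calling geom.TexVertices()
# inside the per-texvertex loop (which is what made it so slow).
tex_verts = geom.TexVertices()
uvs = [(tv.U(), tv.V()) for tv in tex_verts]

# ...from here on, uvs[i] is a cheap indexed lookup for texvertex i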
===========================sigline======================================================
Cage can be an opinionated jerk who posts without thinking. He apologizes for this. He's honestly not trying to be a turkeyhead.
Cage had some freebies, compatible with Poser 11 and below. His Python scripts were saved at archive.org, along with the rest of the Morphography site, where they were hosted.
So the naming should have underscores instead of spaces. Good. I used external names instead of internal because I hoped that would make it easier for the average user. The internal names will be constant, but the user, if (s)he is not a .cr2 hacker, will be more familiar with the name seen within Poser. These things are comparatively easy to change.
The process is target-driven and the files are organized by target, but what we're really logging in the datafile is a specific comparison set which requires that both source and target be correct in order to use the datafile for anything. I don't see how we could organize the whole thing without using both...?
===========================sigline======================================================
Cage can be an opinionated jerk who posts without thinking. He apologizes for this. He's honestly not trying to be a turkeyhead.
Cage had some freebies, compatible with Poser 11 and below. His Python scripts were saved at archive.org, along with the rest of the Morphography site, where they were hosted.
Don't worry about the naming thing (and I didn't mean not to use both, I meant maybe list the target first) - it's not important.
Nice work on the UV thing! I'll be interested to see what you did on that when I get some free time.
Cinema4D Plugins (Home of Riptide, Riptide Pro, Undertow, Morph Mill, KyamaSlide and I/Ogre plugins) Poser products Freelance Modelling, Poser Rigging, UV-mapping work for hire.
Quote - but you appear to list the 'source' first in each case for file/folder names...
Sorry, I left out a word, changing the meaning
Cinema4D Plugins (Home of Riptide, Riptide Pro, Undertow, Morph Mill, KyamaSlide and I/Ogre plugins) Poser products Freelance Modelling, Poser Rigging, UV-mapping work for hire.
Most of the UV stuff is just organizing the texvert data for processing. The actual conversion is almost exactly like the morph transfer. The new UV position is the weighted sum of the UV positions of the texvertices from the source tri. Pretty simple. I tried really hard to make it more complicated, as is my wont, but the correct answer was what I had guessed in the first place. It's so rare that I'm right. :-P Case in point: I've been posting a broken version of Laplace smoothing. I've now fixed it, but I was wrongo-bongo. I have an idea for catching the UV seams. I'll try to patch that in tomorrow and then post an update.
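The core of it is literally just this (a sketch - the weight and UV names here are hypothetical, not the script's):

def transfer_uv(weights, src_uvs):
    # New UV = weighted sum of the source triangle's three UVs,
    # exactly like the morph-delta transfer but in 2D.
    # weights: (w0, w1, w2); src_uvs: three (u, v) pairs.
    u = sum(w * uv[0] for w, uv in zip(weights, src_uvs))
    v = sum(w * uv[1] for w, uv in zip(weights, src_uvs))
    return u, v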
*"Sorry, I left out a word, changing the meaning "
*Yeh, that does change the meaning. Swapping the order around is another easy point. I've been thinking about all of this as source-target correlations, which is why I wrote them in that order. I should be able to change these three points you cite without any ill effect.
===========================sigline======================================================
Cage can be an opinionated jerk who posts without thinking. He apologizes for this. He's honestly not trying to be a turkeyhead.
Cage had some freebies, compatible with Poser 11 and below. His Python scripts were saved at archive.org, along with the rest of the Morphography site, where they were hosted.
Quote - Case in point: I've been posting a broken version of Laplace smoothing.
...I'm guessing that you realized what the normalization was supposed to be doing and re-implemented that?
On the source/target thing.. yeah, that's the logical way to think about it, but the data itself is target-relative (flip_weights() flips things around to source-relative after reading it in to make the transfer easier). I think I set up the header and actors records to be source/target as well, so it might be confusing to change the naming at this point, unless you change them all around.
Cinema4D Plugins (Home of Riptide, Riptide Pro, Undertow, Morph Mill, KyamaSlide and I/Ogre plugins) Poser products Freelance Modelling, Poser Rigging, UV-mapping work for hire.
Quote - On the source/target thing.. yeah, that's the logical way to think about it, but the data itself is target-relative (flip_weights() flips things around to source-relative after reading it in to make the transfer easier). I think I set up the header and actors records to be source/target as well, so it might be confusing to change the naming at this point, unless you change them all around.
In fact, I think I'd just leave them as-is. As long as the order used for folder names and the order used for the header and actor records are the same, that's fine. People are more likely to think in terms of source/destination than the other way around (as proven by both me and you already :) ).
So, the only change needed is to either use internal actor names, or add underscores (or some other non-space character).
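i.e. something like (a sketch - and note the internal name may carry a figure suffix, which I haven't double-checked here):

import poser

actor = poser.Scene().CurrentActor()

# option 1: the internal name ("lShin") - it may carry a ":1" style
# figure suffix that you'd probably want to strip
aname = actor.InternalName().split(":")[0]

# option 2: keep the display name, but make it a single token ("Left_Shin")
aname = actor.Name().replace(" ", "_")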
Cinema4D Plugins (Home of Riptide, Riptide Pro, Undertow, Morph Mill, KyamaSlide and I/Ogre plugins) Poser products Freelance Modelling, Poser Rigging, UV-mapping work for hire.
Underscores it is, then. And you're right about the Laplace. I didn't grasp the fact that averaging is going to have to be a part of the smoothing process, probably in any method, until I worked out the Wings tighten method. Which ends up being almost identical to the Laplace....
Handling seams for UV transfer is tricky. There's no built-in frame of reference, and PoserPython offers no tools for splitting or welding texvertices. So I've had to start by building a series of functions to create a copy mesh using the Poser Numeric array geometry creation methods. So far so good, although I have to test this for speed. It looks like a feasible seams-handling method may need to do a 2D in-bounds check of target UVs against source UVs. But I'm still muddling through that....
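For the in-bounds part, I'm thinking of something like a plain 2D point-in-triangle test (a sketch, nothing Poser-specific):

def uv_in_triangle(p, a, b, c, eps=1e-9):
    # True if UV point p lies inside (or on the edge of) triangle a-b-c.
    def cross(o, q, r):
        return (q[0] - o[0]) * (r[1] - o[1]) - (q[1] - o[1]) * (r[0] - o[0])
    d1, d2, d3 = cross(a, b, p), cross(b, c, p), cross(c, a, p)
    has_neg = d1 < -eps or d2 < -eps or d3 < -eps
    has_pos = d1 > eps or d2 > eps or d3 > eps
    return not (has_neg and has_pos)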
===========================sigline======================================================
Cage can be an opinionated jerk who posts without thinking. He apologizes for this. He's honestly not trying to be a turkeyhead.
Cage had some freebies, compatible with Poser 11 and below. His Python scripts were saved at archive.org, along with the rest of the Morphography site, where they were hosted.
The geometry copy method I'm trying to use introduces a pretty significant RAM leak, which I haven't managed to plug yet. So the copying is disabled here. Change the usecopy variable to 1 in line 1234 to re-enable it. Otherwise, this will change the UVs in the target actor directly.
This also fixes Laplace smoothing although, as I mentioned, it ends up almost identical to tighten. I hope to try to replace the Laplace with Bi-Laplace or volume-preserving smoothing.
And nothing yet on the multi-actor matter. Sorry. Cart before the horse. :-P
===========================sigline======================================================
Cage can be an opinionated jerk who posts without thinking. He apologizes for this. He's honestly not trying to be a turkeyhead.
Cage had some freebies, compatible with Poser 11 and below. His Python scripts were saved at archive.org, along with the rest of the Morphography site, where they were hosted.
Cool - I wish I had more time to play around with this, but I'll try to sneak in a little time to see the new stuff when I can.
BTW, I see that you added underscores to actor names for filename purposes, which is fine, but my concern was actually with the "actors:" record, inside the file :).
One suggestion... it looks like this thing is adding delta values when there shouldn't be (ie. delta = 0). I think this is just the old test-float-against-absolute-zero problem... transfer_morph() line 2788...
if tmorph[0] != 0.0 or tmorph[1] != 0.0 or tmorph[2] != 0.0:
should be changed to:
if abs(tmorph[0]) > FLT_EPSILON or abs(tmorph[1]) > FLT_EPSILON or abs(tmorph[2]) > FLT_EPSILON:
...there's a similar test above there...
if( deltaX or deltaY or deltaZ ):
...that could be changed as well, but I wouldn't worry too much about that one - changing the other one will catch it at the point where the morph deltas are set up.
With the new UV stuff in there... does this script do anything to the UVs of the target if I'm just using the script to transfer morphs? (I hope not, but wanted to make sure).
Cinema4D Plugins (Home of Riptide, Riptide Pro, Undertow, Morph Mill, KyamaSlide and I/Ogre plugins) Poser products Freelance Modelling, Poser Rigging, UV-mapping work for hire.
Aha! One of my notes, a week or so ago, speculated about whether we needed the FLT_EPSILON test for morph zero checks. I copied my use of that form in the smoothing from the morph transfer process. I'll change them all. Thank you.
The actors statement inside the datafile. I'll fix that, too. I've considered the notes inside the datafile to be your realm, so I've hardly looked at that. :-P
The UV transfer is only activated when a separate menu option is selected. There is no overlap between the various functions, that way. I'm not kludging it up that much. :)
I'm kind of veering toward writing out an OBJ file (how do you pronounce "OBJ"? I spell it out by letters, O-B-J, so I put the "an" in front...) for a UV altered mesh. I think this should avoid all the RAM leak business, which seems to come from trying to copy the materials zones to the newly created mesh. There are two problems right now with this. One is splitting and welding the texverts for the new layout. The other is trying to move the seams into corrected positions. I suspect that I will only be able to resolve the latter problem approximately, so the OBJ file will require some hand-correction after processing. But most of it should be correct, in most cases, given a good correlation datafile.
===========================sigline======================================================
Cage can be an opinionated jerk who posts without thinking. He apologizes for this. He's honestly not trying to be a turkeyhead.
Cage had some freebies, compatible with Poser 11 and below. His Python scripts were saved at archive.org, along with the rest of the Morphography site, where they were hosted.
Hokay. I've fixed everything you specified.
I'm still struggling with UV seams, though. To be able to split and weld texverts, which is necessary to approach conversion effectively, I have to work with a copy of the target geometry. Simply making a new prop from the target geom won't work, because that still suffers the texvert limitations, and any changes to the new prop via Python get carried back to the original actor, because of Python's object linking. So I can either create a new geometry using the Numeric stuff and NewGeometry(), or work with OBJ files.
The NewGeometry approach is fairly fast, but I can't seem to avoid a RAM leak in the process. With V1-V3, I invariably have a loss of at least 20 MB of memory. As far as I can tell, this is just an effect of using the PoserPython geometry creation methods with such a large mesh.
With the OBJ approach, I can either write my own OBJ file, patching in the new vt and f line data, or I can export an OBJ, then read it back and patch in the changes. The first approach seems a lot like reinventing the wheel, and I'd have to do a lot of extra organizing to maintain the groups and materials assignments (not to mention the normals). So I've tried the latter method. This works, but it's slow and the RAM takes an even larger hit while processing the OBJ. It freed up at the end, but while processing I lost 150 MB of RAM, and the process took 20 minutes. (And don't get me started on the PPy documentation. The exportCallback option is wholly undocumented. Sheesh.) I can presumably remedy the RAM hit somewhat by using a buffer with the read-write process, but I'm not sure that will help at all with the speed.
All in all, I have to say this exposes a whole new area where PoserPython was just not well thought out. It should allow access to all sorts of geometry data and methods which are simply omitted. They've only given us bare-bones access to most things. Frustrating. It's a poor craftsman who blames his tools, blah blah, etc. But, jeez. The tools could use some work.
But anyway. I'm working on it. I hope to have another update tomorrow, which should at least clean up the UV transfer enough to make it fairly easy to clean up the seams in UV Mapper.
===========================sigline======================================================
Cage can be an opinionated jerk who posts without thinking. He apologizes for this. He's honestly not trying to be a turkeyhead.
Cage had some freebies, compatible with Poser 11 and below. His Python scripts were saved at archive.org, along with the rest of the Morphography site, where they were hosted.
Okay. I think I've plugged the leak with the OBJ export method, and I've sped it up for larger meshes. It still uses a lot of RAM while processing, but not as much as before, and it releases it afterwards.
A couple of random notes. It looks like a list of actors may need to be explicitly cleared once you're done with it. The Export callback option for the import-export object seems to use OnOff() to control what gets exported, but the implementation of such a callback is dreadfully awkward (it can't accept arguments, can't be part of a class, and pretty much requires that the actor screening ref be passed as a global variable), and one is better off just writing a function from scratch to toggle OnOff() for the various actors.
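The from-scratch version is simple enough (a sketch, assuming the usual Actors()/OnOff()/SetOnOff()/InternalName() calls - the 'wanted' set is just whatever actors you need in the OBJ):

import poser

def show_only(fig, wanted):
    # Turn on just the actors we want in the exported OBJ and remember
    # the previous state so it can be put back afterwards.
    previous = {}
    for act in fig.Actors():
        previous[act.InternalName()] = act.OnOff()
        act.SetOnOff(1 if act.InternalName() in wanted else 0)
    return previous

def restore_onoff(fig, previous):
    for act in fig.Actors():
        act.SetOnOff(previous.get(act.InternalName(), 1))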
So I'll work on splitting and welding texverts and try to get this out tonight....
===========================sigline======================================================
Cage can be an opinionated jerk who posts without thinking. He apologizes for this. He's honestly not trying to be a turkeyhead.
Cage had some freebies, compatible with Poser 11 and below. His Python scripts were saved at archive.org, along with the rest of the Morphography site, where they were hosted.
Sounds good. The UV stuff is cool and I'm sure it'll come in handy in the future. I'm just waiting on the cross-actor stuff :).
I've successfully used the script to manually transfer some morphs in some clothing I made recently, but doing multiple source actors manually takes a lot of time. If I could transfer multiple morphs at once (should be pretty straight-forward, code-wise), that would help, but I'll probably just wait on the multi-actor stuff.
Cinema4D Plugins (Home of Riptide, Riptide Pro, Undertow, Morph Mill, KyamaSlide and I/Ogre plugins) Poser products Freelance Modelling, Poser Rigging, UV-mapping work for hire.
So am I getting my priorities wrong? Umm. I started the UV stuff to fill the time while we were determining what should be done for the multi-actors. Now I'm trying to get some closure on the UVs before I move on. Cart, horse, etc. I think I tend to let the cart pull my horse most of the time, though. :-P
I'm finding the target texpolys which need to have split seams now, but I think I can improve the process. The object export-edit also needs handling for cases when the exported object doesn't have vt lines for me to alter. So I still have a few things to clean up.
Lemme look at the transfer morphs looping idea. I'll try to do it without screwing up your code too much. I think the looping should probably be at the GUI level and the existing function should just be called repeatedly. Maybe I can work that in today, too.
Sorry for the delays. I'm learning some interesting things, though....
===========================sigline======================================================
Cage can be an opinionated jerk who posts without thinking. He apologizes for this. He's honestly not trying to be a turkeyhead.
Cage had some freebies, compatible with Poser 11 and below. His Python scripts were saved at archive.org, along with the rest of the Morphography site, where they were hosted.
Okay. I need some feedback before I proceed with multiple morph transfers. We need some information before we start: source actors, target actors, and morph names. The most complex way of doing this would require a lot of user specification of which correlations and morphs to use, and would mean either a massive redesign (and complication) of the GUI or a series of popups throughout the process to allow user input at each step. Either way, that seems less a convenience for the user (and the programmer) than a hassle and a complication.
To automate this, we need to make some assumptions. I would say we should walk the figure-figure correlation folder path, checking each actor-actor subfolder for a "full.vwt" datafile, then transferring the morph using that for each correlation set, IF the selected morph exists in that specific actor. This requires at least two things: a proper TDMT folder path for the actors, and a "full.vwt" for them. Presumably this is driven by one single morph selection for each run. If not, we need a device to select multiple morphs.
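In rough terms, I'm picturing the walk like the sketch below. The folder layout (one figure-pair folder, actor-pair subfolders, a full.vwt in each) and the helper names are just placeholders for however the datapath actually ends up being organized:

```python
import os

def transfer_for_all_correlations(data_root, figure_pair, morph_name):
    """Walk the figure-figure correlation path and run the transfer for every
    actor-actor subfolder that has a 'full.vwt', IF the source actor has the
    selected morph. The helper functions are hypothetical placeholders."""
    fig_path = os.path.join(data_root, figure_pair)
    if not os.path.isdir(fig_path):
        return
    for sub in os.listdir(fig_path):
        vwt = os.path.join(fig_path, sub, 'full.vwt')
        if not os.path.isfile(vwt):
            continue                                        # no full correlation here
        src_actor, tgt_actor = split_actor_pair(sub)        # placeholder helper
        if not actor_has_morph(src_actor, morph_name):      # placeholder helper
            continue
        transfer_morph(src_actor, tgt_actor, morph_name, vwt)  # placeholder call
```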
So making this GUI-driven isn't easy, and probably isn't very accessible to the user either. Adding these selection capabilities to the core GUI would make the other functions confusing to set up at the user level. But automating it is less flexible and rather restrictive.
So I need some input before I proceed. What do you have in mind for this? What parameters would you accept, or prefer? Do you want to have to stop mid-process to keep making new selections, or do you want to press a button and have it go? Do you want to handle one morph at a time, or several morphs? We need to discuss this, I think. And this may lead back to the questions about datafile and datapath design and handling.
Whaddaya think?
===========================sigline======================================================
Cage can be an opinionated jerk who posts without thinking. He apologizes for this. He's honestly not trying to be a turkeyhead.
Cage had some freebies, compatible with Poser 11 and below. His Python scripts were saved at archive.org, along with the rest of the Morphography site, where they were hosted.
Ok, couple of thoughts...
First off, it's your baby, so feel free to work on whatever floats your boat :). I'm still pretty busy with freelance work, so I'll help when I can, but for the time being, I just don't have any free time. It just happens that some of the work I have could benefit from this script, if/when it can do more than one morph at a time and, secondly, work across multiple actors.
So having said that, my idea of multiple morphs, with only having to select source and target figures, an actor on each and multiple morphs is... well, that :). As far as I know, this would only require allowing multiple selections on the morph list/column/scroller thingie, and then calling transfer_morph() repeatedly, with a new morph name each time (with the same source and target actors).
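Something like the little loop below is really all I'm picturing, assuming the morph list widget can hand back multiple selected names and that transfer_morph() keeps taking a source actor, target actor and morph name (I'm guessing at the argument order):

```python
# Sketch: run the existing single-morph transfer once per selected morph.
# selected_morphs would come from a multi-select listbox; the argument
# order of transfer_morph() is assumed, not taken from the actual script.
for morph_name in selected_morphs:
    transfer_morph(source_actor, target_actor, morph_name)
```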
Currently, you can only transfer one morph. After that, all the GUI lists are cleared and you have to select the source figure, select the target figure, select the source actor (which may require scrolling down the list first), select the target actor (which may require scrolling down the other list), select the morph (which may require scrolling THAT list) and click GO again. When you multiply that by a couple dozen morphs... tedium sets in fast.
So, just to clarify - I'm talking about multiple different morphs. Smile, Nose Big, Cheeks Poofy, in one go.
Next, the other part of that is handling multi-actor correlations. This is a separate, but potentially related issue. The best way to handle multi-actor correlations is probably by allowing multiple source actors to be selected, but only one target actor. Correlations are then run between that target actor and the list of source actors.
If (or as long as) it's still all in one script and the user has selected multiple morphs, then those morphs are relative to the source actor with the same name as the selected target actor. Ideally, when just transferring morphs, they'd only need to select one actor on each side... the (pre-generated) data file would let the script know that it needed to load other cross-actor correlation files to satisfy all vertices of this target.
So the interface really only needs to change to allow multiple morphs to be selected, and once the multi-actor correlation code is in place, multiple source actors. The only thing complicating matters is the fact that it's still all one script and you have code that does the correlation if the file(s) can't be found. If it were two scripts, the morph transfer one would just throw up its hands and tell the silly user to go run the correlation script first. And the correlation script wouldn't care about morphs at all (oh, the tangled web we weave... :)).
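Just to sketch how I picture the data side of the multi-actor case: each target vertex record would name the source actor it was satisfied by, so the transfer loop can pull in whichever cross-actor correlation files it needs. The field names below are invented for illustration, not the actual .vwt layout:

```python
# Hypothetical per-target-vertex correlation records spanning several source actors.
correlation = {
    0:   {'source_actor': 'head', 'verts': [412, 413], 'weights': [0.7, 0.3]},
    998: {'source_actor': 'neck', 'verts': [12, 13],   'weights': [0.5, 0.5]},
}

def apply_morph_deltas(correlation, source_deltas):
    """Accumulate weighted source deltas onto each target vertex, pulling from
    whichever source actor the record names. source_deltas maps an actor name
    to a dict of per-vertex (dx, dy, dz) tuples."""
    target_deltas = {}
    for tgt_index, rec in correlation.items():
        deltas = source_deltas.get(rec['source_actor'], {})
        x = y = z = 0.0
        for src_index, w in zip(rec['verts'], rec['weights']):
            dx, dy, dz = deltas.get(src_index, (0.0, 0.0, 0.0))
            x += dx * w
            y += dy * w
            z += dz * w
        target_deltas[tgt_index] = (x, y, z)
    return target_deltas
```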
Cinema4D Plugins (Home of Riptide, Riptide Pro, Undertow, Morph Mill, KyamaSlide and I/Ogre plugins) Poser products Freelance Modelling, Poser Rigging, UV-mapping work for hire.
....a third related, but separate, issue sounds more like what you discussed above... select multiple targets, multiple sources and multiple morphs, click GO and go eat some cookies while the script runs. That would be cool, but I would settle for doing multiple morphs on one target actor at a time for now :).
Cinema4D Plugins (Home of Riptide, Riptide Pro, Undertow, Morph Mill, KyamaSlide and I/Ogre plugins) Poser products Freelance Modelling, Poser Rigging, UV-mapping work for hire.
*"First off, it's your baby, so feel free to work on whatever floats your boat :). I'm still pretty busy with freelance work, so I'll help when I can, but for the time-being, I just don't have any free time. It just happens that some of the work I have could benefit from this script, if/when it could do more than one morph at a time and secondly, across multiple actors."
*Gotcha. You seemed to be requesting something with an idea in mind, and it turned out that I wasn't quite sure what the idea was. :-P I got confused, and ended up thinking more in terms of transferring something like a full-body morph across the whole figure. Which would require a completely different setup.
The main complication with adding multiple actor support is determining which selected actor gets to have its morphs displayed in the morph listbox. With limited space, we're going to have to force the user to follow a certain order of actor selection. I would say the first selection should list its morphs.
But that's your secondary point. Once again, my cart outpaces my horse. I think this multiple-morph capability would be better accommodated as part of a standalone morph transfer app, such as you've suggested. It's easy enough to implement, but once we start allowing multiple morphs and multiple actors and what-all, I end up in a tangled mess trying to prioritize it all. The main run function multitasks to handle comparisons, morph transfers, and shape transfers, and it already has to look up and reset some but not all variables in each case. I've been starting to think about simplifying and dividing the script lately. The main problem with simplification/division is that a zip file will then have to be distributed, and we'll need to post the WIP scripts somewhere and link to them on the forum. I can start putting them up on my site, I suppose.
I'll go ahead and add multiple morph selections, anyway. That's fairly simple. And I'll add splitting off the separate transfer script to my list. Once that's split off, it should probably inherit the multi-morph capability and leave only the basic function for the primary script.
I've got all the data I need now for handling the UV seams. It's mainly a matter of putting the pieces together, then debugging, if my process continues to work out. I don't think this will end up 100% correct, because the seam splits won't line up exactly at the edges between the two meshes. So some hand-correction will probably be needed in places, but it looks like this should at least move the texverts so they aren't an overlapping mess, making hand-editing feasible.... I suppose I could try to implement another post-processing step that runs point-line collisions from the target UVs against the source UVs, to correct the seam edge positions. Hmm.
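If I do add that step, it's really just a 2D closest-point-on-segment test in UV space: project each target seam texvert onto the nearest source seam edge and snap it there. Something like this bare-bones version (plain tuples, nothing Poser-specific):

```python
def closest_point_on_segment(p, a, b):
    """Return the point on 2D segment a-b closest to point p (all UV tuples)."""
    ax, ay = a
    bx, by = b
    px, py = p
    dx, dy = bx - ax, by - ay
    seg_len_sq = dx * dx + dy * dy
    if seg_len_sq == 0.0:
        return a                              # degenerate edge
    t = ((px - ax) * dx + (py - ay) * dy) / seg_len_sq
    t = max(0.0, min(1.0, t))                 # clamp to the segment's endpoints
    return (ax + t * dx, ay + t * dy)

def snap_to_nearest_edge(p, source_edges):
    """Snap a target seam texvert to whichever source seam edge is closest."""
    best = p
    best_dist = None
    for a, b in source_edges:
        q = closest_point_on_segment(p, a, b)
        d = (q[0] - p[0]) ** 2 + (q[1] - p[1]) ** 2
        if best_dist is None or d < best_dist:
            best, best_dist = q, d
    return best
```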
===========================sigline======================================================
Cage can be an opinionated jerk who posts without thinking. He apologizes for this. He's honestly not trying to be a turkeyhead.
Cage had some freebies, compatible with Poser 11 and below. His Python scripts were saved at archive.org, along with the rest of the Morphography site, where they were hosted.
Unless you are testing two identical meshes, chances are near 100% that the splits between groups will be different enough to miss some verts (depending on whether you are comparing A to B or B to A). A closest-vertex fix would be a bad thing. Most of the back half of V4's head is actually her 'neck', for example. If you were trying to correlate V3 to V4's head, all those missing verts on the back side would pick out verts that were very far away from where they should be. The correct polys to match against are the ones in the 'neck'.
Yes, it would - see above.
My guess? 100%
I suggested that as a work-around, but also as a means of verifying that that process works. If it doesn't work manually, it won't work in an automated way either. Verifying it manually might tell us something about what needs to happen in the script.
Cinema4D Plugins (Home of Riptide, Riptide Pro, Undertow, Morph Mill, KyamaSlide and I/Ogre plugins) Poser products Freelance Modelling, Poser Rigging, UV-mapping work for hire.