First, I don't know anything about Python and I don't mean to waste your time; this is just a suggestion of a possible approach using Poser.
Would it not be possible to attach a prop to an untransformed "standard hand" at the position you want using Python, apply the necessary scale and translation/rotation changes through the figure's IK, and then grab the prop's world-coordinate position again so you can calculate the vector between it and the camera? If you want a vector for the direction the finger is pointing, you should be able to get that from the relevant IK chain, or by adding a second prop along the same chain.
This is pure speculation; I hope it provokes a more useful comment from someone who knows Poser's Python API.
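A minimal PoserPython sketch of that idea, untested and based only on my reading of the PoserPython manual; the actor name "rIndex3", the pre-made "ball_1" prop, and the WorldDisplacement() call are all assumptions:

```python
import math
import poser

scene = poser.Scene()
figure = scene.CurrentFigure()

# Grab the fingertip actor and a small ball prop added beforehand in the UI.
# Actor names vary per figure -- check the actor list of your "standard hand".
fingertip = figure.Actor("rIndex3")
marker = scene.Actor("ball_1")

# Parent the prop to the fingertip so it follows any pose/IK change.
marker.SetParent(fingertip)

# ... apply the scale/translation/rotation changes here ...

# World-space positions (WorldDisplacement() is listed in the PoserPython
# manual, but verify it against your Poser version).
px, py, pz = marker.WorldDisplacement()
cx, cy, cz = scene.CurrentCamera().WorldDisplacement()

# Vector from the marker to the camera, and its length.
vx, vy, vz = cx - px, cy - py, cz - pz
dist = math.sqrt(vx * vx + vy * vy + vz * vz)
print("marker->camera: (%f, %f, %f), distance: %f" % (vx, vy, vz, dist))
```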
Whilst I don't know how to generate said database, bagginsbill et al. have mentioned and/or demonstrated generating a depth map of a PoserSurface using the P node. Since P(x, y, z) is a point in space relative to the origin (0, 0, 0), and the orthographic front camera sits at (0, 0, z), the P node can simply be applied to show the distance between the surface and the front camera.
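To spell out the geometry being assumed here: with an orthographic front camera on the z-axis at height $z_c$, the per-pixel depth of a surface point $P = (x, y, z)$ reduces to a z-difference, while a general camera at position $C$ needs the full Euclidean distance:

$$ d_{\text{ortho}} = z_c - z, \qquad d_{\text{persp}} = \lVert P - C \rVert = \sqrt{(x - C_x)^2 + (y - C_y)^2 + (z - C_z)^2}. $$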
Quote - Whilst I don't know how to generate said database, bagginsbill et al. have mentioned and/or demonstrated generating a depth map of a PoserSurface using the P node. Since P(x, y, z) is a point in space relative to the origin (0, 0, 0), and the orthographic front camera sits at (0, 0, z), the P node can simply be applied to show the distance between the surface and the front camera.
Hehe, if I switch off my background in rendering, 3D modeling, and anything related for a moment, this reads like ancient hieroglyphics to me :-)
As for the Python: if you want to get a bunch of images, for example from different angles or with different poses, it is probably easier to just make an animation, for example by rotating the camera around the object or rotating the object by 1 degree per frame. When rendering animations, Poser can output a single image for each frame. Unless you already know either Poser or Python well, I would not recommend learning Poser and Python at the same time by starting to write Poser scripts (too many things are likely to go wrong).
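If you do end up scripting it anyway, the whole loop is only a few lines. A hedged sketch follows: the parameter name "yRotate" and the SaveImage(format, path) signature are assumptions taken from the PoserPython manual, not tested code.

```python
import poser

scene = poser.Scene()
actor = scene.CurrentActor()          # select the hand in the UI first

# Internal rotation parameter; some figures call it "yrot" instead.
yrot = actor.Parameter("yRotate")

for step in range(0, 360, 1):         # one image per degree
    yrot.SetValue(step)
    scene.DrawAll()                   # refresh the scene before rendering
    scene.Render()
    scene.SaveImage("png", "hand_%03d.png" % step)
```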
Attached Link: http://www.renderosity.com/mod/forumpro/media/folder_10/file_464897.jpg
Oops, sorry about that, milli! It looks like the image linked above. - iLucian
Re-reading my post, it doesn't look at all clear, so I will try again :S
If the problem is how to identify the part of the finger you want to measure, then a possible solution is to follow the motion-capture technique of attaching something easy to identify to the part you are interested in. Poser can do something similar: it is very common for users to attach an object such as a ring to a character's finger, and when the finger (or any part of the character) is moved, Poser ensures the ring follows it. In the Poser user interface this is achieved by selecting the object and, via its Properties palette, setting its parent to the part you want it to follow.
If the depth-map script doesn't let you select the part to measure, maybe combining this approach with the script will.
Good luck
"I would need some piece of software that can output a 3D model..."
Completely out of my depth (no pun intended) and showing it. So if the hand/camera etc. were set up, exporting the hand as a 3D model (.obj format) would allow you to extract the data? I think if you export it as a 'morph target' it will essentially export a point cloud, though I may be (and probably am) wrong about that. You probably wouldn't want 100K models lying around, so I'm assuming some kind of export->analyze->delete pipeline, sketched below. I'd be interested to know what goes in the database; it would be an interesting design. Perhaps, if it's not proprietary, you could explain more about the ultimate use of this beast. Apologies if I've entirely misunderstood what you're looking for :-)
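For the analyze step of that hypothetical export->analyze->delete pipeline, reading the point cloud out of an exported .obj file needs no Poser API at all; a plain-Python sketch (the filename is made up):

```python
import os

def obj_vertices(path):
    """Collect (x, y, z) from every 'v' line of a Wavefront OBJ file."""
    points = []
    with open(path) as f:
        for line in f:
            if line.startswith("v "):        # vertex positions only; skips vn/vt
                _, x, y, z = line.split()[:4]
                points.append((float(x), float(y), float(z)))
    return points

cloud = obj_vertices("hand_0001.obj")        # exported from Poser beforehand
print("%d vertices" % len(cloud))
# ... push whatever you need into the database, then discard the file ...
os.remove("hand_0001.obj")
```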
"Democracy is a pathetic belief in the collective wisdom of individual ignorance." - H. L. Mencken
The request for "the point cloud of the hand as seen from the camera or a depth map (each pixel)" would generate a lot of data: XYZ coordinates for every vertex of the mesh, and more still if you want a denser cloud; see the rough estimate below. I am not altogether sure that storing that amount of data will be of any use, but I assume you already have a system to process the collected data afterwards.
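A back-of-envelope estimate of "a lot", with made-up but plausible numbers (the vertex count is pure assumption):

```python
poses = 100000             # size of the requested database
verts = 10000              # rough vertex count of one hand mesh (assumption)
bytes_per_vertex = 3 * 4   # x, y, z stored as 32-bit floats

total = poses * verts * bytes_per_vertex
print("%.1f GiB of raw point data" % (total / 2.0 ** 30))   # ~11.2 GiB
```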
OTOH, if you only want to be able to call up a pose from a library of poses, then Poser already provides that facility, of course.
OTOH, I may have entirely misunderstood what was asked.
"The request for "the point cloud of the hand as seen from the camera or a depth map (each pixel)" would generate alot of data."
That was my thought, but perhaps the intent is to manipulate the data and extract or transform it into something less bulky. If the whole enchilada is being stored, I guess I'd look at using some form of BLOB storage. Then again, 'database' may not refer to a typical SQL database, which is what I'm familiar with. It does sound like a fascinating project though :-) Whether it's best suited to Poser, IDK.
"Democracy is a pathetic belief in the collective wisdom of individual ignorance." - H. L. Mencken
Hey! You guys are awesome!
I bookmarked this post on another computer and I haven't checked it for a while. Thanks for the answers.
Meanwhile I have been working hard on this project. What I did was color each finger segment (actor) differently for the color image, and I used the P node to get the depth information. I was thinking that with the 3D model I could do some 'manual' depth extraction, but that would have been overkill. In the future I may tag some polygons specifically for better localization, like just the tips of the fingers.
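In case it helps anyone trying the same trick, the per-segment coloring can be scripted roughly as below. Actor.Materials() and SetDiffuseColor() appear in the PoserPython manual, but the name filtering and the per-actor behavior are assumptions (on many figures materials are shared figure-wide, so you may need one material per segment first):

```python
import poser

scene = poser.Scene()
figure = scene.CurrentFigure()

# Distinct flat colors, one per finger segment, so every actor is
# identifiable in the rendered color image.
palette = [(1, 0, 0), (0, 1, 0), (0, 0, 1),
           (1, 1, 0), (1, 0, 1), (0, 1, 1)]

names = ("Thumb", "Index", "Mid", "Ring", "Pinky")
segments = [a for a in figure.Actors()
            if any(n in a.Name() for n in names)]

for i, seg in enumerate(segments):
    r, g, b = palette[i % len(palette)]
    for mat in seg.Materials():
        mat.SetDiffuseColor(r, g, b)
```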
The Python script rotates the object and renders each frame as both color and depth. Scratch that :/ it seems I got an error and it crashed; it does that randomly sometimes. I don't know why, I will have to look into it.
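On the random crashes, a shot in the dark: long render loops can starve Poser's event loop, so one pattern worth trying is to yield between frames and make each frame survivable. ProcessSomeEvents() is in the PoserPython manual; whether it fixes this particular crash is pure speculation.

```python
import poser

scene = poser.Scene()

for frame in range(360):
    scene.SetFrame(frame)
    try:
        scene.Render()
        scene.SaveImage("png", "depth_%03d.png" % frame)
    except Exception:
        # Log the failure and keep going instead of dying mid-batch.
        print("frame %d failed" % frame)
    poser.ProcessSomeEvents()   # let Poser's UI/event queue breathe
```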
The database will contain pairs of images: the color and depth ones, and I'll be using it for my MSc thesis: inferring hand position from depth images. :)
Hello everybody!
First off, let me say that I have no background in rendering, 3D modeling, or anything related. So I came here to humbly ask the advice of experts.
What I ultimately need is a huge database (100,000+) of 3D models of hands from different types of people (young, old, male, female, skinny, chubby) as seen from different angles (front, top, bottom, left, right and all in between).
Since I couldn't possibly render all of these by hand, I would need some piece of software that can output a 3D model given these parameters: hand type, distance, camera position (or rotation of the hand), and of course rotation, side movement, and twist for every joint. If the software has something that prevents unnatural hand positions (like fingers merging into one another or bending backwards), that would be even better. The data I need from it is practically the point cloud of the hand as seen from the camera, or a depth map (each pixel representing the distance from the camera to the surface of the model). As such, texture is mostly useless, though it would be a good addition. Also, if I had the model as 3D vertices or triangles, I think I could work out a way to extract the data I need myself.
I found out that you could do this with Poser and Python, and I'm still learning how to set up the scene with the hand in a certain pose, but I don't see an easy/clear way of extracting the depth data. Can anyone help?
Looking forward to seeing your responses.
Cheers,
Lucian