



Subject: ok so this is what needs to be invented!


originalplaid ( ) posted Sun, 26 December 2004 at 9:51 PM · edited Sat, 11 January 2025 at 11:41 PM

After playing with animation, investigating .bvh files and motion capture etc., I have decided there is something sorely missing from Poser. We need a small mannequin (similar to this http://i23.ebayimg.com/01/i/01/7b/7c/f3_2.JPG that Ikea sells) that has motion sensors. Pose it, and the program takes its cues from there to pose the figure. It would be a lot easier and cheaper than motion capture hardware. Anyone else think this is a good idea? I know it takes a lot of the skill and craft out of tweaking the keyframes and dials, but I want to make animated shorts and look at Poser as a way to aid my storytelling, not necessarily as a way to be artistic. Does that make sense?


ockham ( ) posted Sun, 26 December 2004 at 10:40 PM

It's an interesting idea. Sort of a humanoid joystick. Hmm.... Instead of motion sensors, how about colored LEDs on main joints, sensed by a trio of cheap webcams, front/left/top... It would be fun at any rate.

My python page
My ShareCG freebies


originalplaid ( ) posted Sun, 26 December 2004 at 10:47 PM

I thought about that, then I realized that if you had little gears or something mechanical instead of optical it could be more precise and probably cheaper. The computer would know that, say, a complete rotation of the arm takes 42 clicks on the shoulder gear, and could figure out the position from that. This would be just for posing a character, not for the spatial location of the figure. Think about it: if it was sixty-ish bucks you could get two, pose Vickie and Mike in a cafe someplace, and they could be having discussions with the assistance of Mimic 3 in no time! Throw in some Vue trees and you'd have a quick and dirty start that you could go back and tweak to perfection later.
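A rough sketch of the "clicks" math in Python, for illustration only: the 42-clicks-per-revolution figure comes from the post above, but the encoder channel names, the read_clicks() source, and the dial mapping are all hypothetical.

# Sketch: convert gear-encoder "clicks" into joint angles for Poser-style dials.
# Assumes a hypothetical encoder that reports 42 clicks per full revolution;
# read_clicks() stands in for whatever the real hardware driver would provide.

CLICKS_PER_REV = 42

def clicks_to_degrees(clicks):
    # One full revolution = CLICKS_PER_REV clicks = 360 degrees.
    return (clicks % CLICKS_PER_REV) * 360.0 / CLICKS_PER_REV

def read_clicks(channel):
    # Placeholder for the real sensor read; would return a raw click count.
    raise NotImplementedError

# Hypothetical mapping from sensor channels to Poser body parts and dials.
JOINT_MAP = {
    "shoulder_gear": ("Right Shoulder", "Bend"),
    "elbow_gear":    ("Right Forearm", "Bend"),
}

def poll_pose():
    pose = {}
    for channel, (actor, dial) in JOINT_MAP.items():
        pose[(actor, dial)] = clicks_to_degrees(read_clicks(channel))
    return pose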


kuroyume0161 ( ) posted Sun, 26 December 2004 at 11:00 PM

Not a bad idea, considering the costs of either buying/renting motion capture equipment or 'booking' a business to do it for you. What's funny is that Poser's initial idea was to take that real posable mannequin and put it into the computer so you didn't need the real thing. But what you suggest would be much faster for posing. For stills, I think two or three orthographic photos of a posed subject would work for most situations (except where the dynamics of movement disallow 'posing'). My question, from some robotics experience, is how much would it cost to put sensors in a human mannequin? There are over 60 joints in the body (can't seem to find a definite figure here), including the phalanges and not including the spinal column. Some of these are ball-and-socket joints with a free range of motion - the best I can think of for those (shoulders and hips) is something joystick-like. Others are pivot joints that twist. There are seven types of joint in the body. There would definitely need to be some sort of calibration to put the mannequin into a 'zero-pose' that matches the destination figure's before going about your business. Just playing devil's advocate and making you consider those sordid details (because my boss is in them). >;)

C makes it easy to shoot yourself in the foot. C++ makes it harder, but when you do, you blow your whole leg off.

 -- Bjarne Stroustrup

Contact Me | Kuroyume's DevelopmentZone
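A minimal sketch of the 'zero-pose' calibration kuroyume0161 mentions: read the raw sensors once while the mannequin is held in the destination figure's default pose, then send only offsets from that reading to the dials. Everything here (capture_raw(), the joint names) is hypothetical.

# Sketch: zero-pose calibration for a sensored mannequin.
# capture_raw() would return {joint_name: raw_angle_in_degrees} from the hardware.

class MannequinCalibration:
    def __init__(self):
        self.zero = {}

    def calibrate(self, raw_readings):
        # Call once while the mannequin is held in the figure's default pose.
        self.zero = dict(raw_readings)

    def offsets(self, raw_readings):
        # Angles relative to the zero pose -- these become the dial values.
        return dict((joint, value - self.zero.get(joint, 0.0))
                    for joint, value in raw_readings.items())

# Usage (with a hypothetical capture_raw() supplied by the hardware driver):
#   cal = MannequinCalibration()
#   cal.calibrate(capture_raw())        # mannequin held in the default pose
#   pose = cal.offsets(capture_raw())   # later readings become dial offsets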


kuroyume0161 ( ) posted Sun, 26 December 2004 at 11:02 PM

I don't think simple gearing will allow the multi-axis rotation accumulation required.

C makes it easy to shoot yourself in the foot. C++ makes it harder, but when you do, you blow your whole leg off.

 -- Bjarne Stroustrup

Contact Me | Kuroyume's DevelopmentZone


ockham ( ) posted Sun, 26 December 2004 at 11:13 PM

file_160796.jpg

Here's what I was imagining. I suspect 'clicks' would get way too complicated mechanically, though it would certainly make digital pickup easy. How about PVDF strain sensors? The material is cheap and can be rigged easily to sense linear strain... not so easily for twist, but I'll bet there's a tricky way to get there.

My python page
My ShareCG freebies


originalplaid ( ) posted Sun, 26 December 2004 at 11:23 PM

Would you have to consider the real number of joints in the human body, or just the "lthigh", "rshin", "neck", etc. type that correspond to the Poser dials? It might have to be a separate program outside of Poser, one that would calculate all the data from the positions of the body in the initial pose and set that as the "default" position. I don't think this physical figure thing would be good for trying to do "motions", more just setting the keyframes and letting Poser do the work. If you were doing it step by step, stop-action style, one little movement at a time, you're basically duplicating what you could already do with more accuracy in Poser with the dials.


originalplaid ( ) posted Sun, 26 December 2004 at 11:26 PM

Looking at the picture above, I just had the strangest image of John Cusack in "Being John Malkovich" and his marionettes! A setup as described above with a great puppeteer would be unstoppable - Jim Henson's company would absolutely rock with something like that. I think the word "graceful" would become the key phrase in reviews of animated movies done that way.


ockham ( ) posted Sun, 26 December 2004 at 11:36 PM

file_160798.jpg

It would certainly have to be just the main joints. Fingers would be way too wild! Strain sensors could be placed roughly where the main opposing muscles are in a real body. For instance, three sensors placed like this would give enough info for the neck.

My python page
My ShareCG freebies
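Roughly how ockham's three-sensor neck could be decoded in software: a small calibration matrix mapping the three strain readings to bend / side-side / twist. The sensor placement and all the numbers below are invented for illustration; in practice the coefficients would be fitted by posing the mannequin at known angles.

# Sketch: three strain readings (front / left / right of the neck) -> rotations.
# Each row of the matrix maps the three strains to one rotation, in degrees.

CALIBRATION = [
    [ 40.0, -20.0, -20.0],   # bend       (nod forward/back)
    [  0.0,  35.0, -35.0],   # side-side  (ear toward shoulder)
    [ 15.0, -30.0,  30.0],   # twist      (crude, per the caveat about twist above)
]

def strains_to_neck_angles(front, left, right):
    strains = (front, left, right)
    return [sum(c * s for c, s in zip(row, strains)) for row in CALIBRATION]

bend, side, twist = strains_to_neck_angles(0.10, 0.02, 0.03)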


originalplaid ( ) posted Sun, 26 December 2004 at 11:51 PM

Guess Curious Labs wouldn't be too into this, huh? Who's got an EE degree and nothing but free time? The other reason I would like something like this is to avoid body parts ramming through other body parts... I keep getting fingers poking through the other arm, or legs that melt together when I use a pose that crosses the legs. Pretty much my problem is that I suck at posing. I love the .bvh files I found on the web but wish I could make my own. Does anyone use the Visual Tracker program in the marketplace?


operaguy ( ) posted Mon, 27 December 2004 at 12:11 AM · edited Mon, 27 December 2004 at 12:21 AM

I don't see how this could work for animation. The whole point of motion capture is that the human actor moves through space with infinite grace and subtle declination of all the important joints, muscles, etc. How would you "move" the mannequin?

As originalplaid implied above, it might be really spiffy for single-frame poses.

::::: Opera :::::

I have been investigating small-scale motion capture. You can put together a facial-only system for around $2000 using your own camcorder (proFace, video-based), and get full-body wireless capture (Gypsy) for around $20,000 with a 1/2-mile radius! That sounds like a lot, but those prices are much, much lower than just a few years ago.

Click here for a good link. (opens in new window)

Message edited on: 12/27/2004 00:21


lmckenzie ( ) posted Mon, 27 December 2004 at 12:21 AM

Actually, this isn't a new idea. I thought of it a couple of years ago and posted it here, only to find that someone had thought of it before (of course) and someone had even made a crude prototype. They posted a picture of it. It was a flat mannequin, IIRC, made of fiberboard or some such and probably hooked to the PC using the serial port. It didn't have full jointing, obviously, but the person stated that it worked. The thread may still be here somewhere if you want to undertake the daunting task of trying to find it.

A number of years ago there was a gaming accessory for one of the consoles, called perhaps a "Power Glove", that was a skeletal glove that allowed hand-positioning input. I imagine the basic technology exists to create a kind of body skeleton that would allow direct "Pose It Yourself" input, which would be even more fun.

I think it all probably boils down to economy of scale. Someone could cook up a homebrew model, but getting the kinks out, getting a manufacturable design, and being able to afford making enough of them to keep the price reasonable is the challenge. And of course, you have to sell enough of them. Figure out how much at least a thousand or so Poser users would definitely pay, factor in the possibility of a Poser 5 "it doesn't work on my machine, it's crap, don't buy it" nightmare, etc. It's definitely not a project for the faint-hearted or those with shallow pockets.

"Democracy is a pathetic belief in the collective wisdom of individual ignorance." - H. L. Mencken


operaguy ( ) posted Mon, 27 December 2004 at 12:34 AM

I have been tempted to plunge on Visual Marker a couple of times. I wrote to the author. How shall I put this politely....the product is somewhat undersupported!

It still could be valuable. Did you download the "tutorial video" for it? You have to actually "purchase" the video at his store for zero dollars to get it.

Basically, you take an existing video, up to two (or is it four?) views, and open EACH FRAME in an editor and physically indicate (by clicking with the mouse) where the markers 'belong' on the vid. I think it could help if your actor actually wore clothing with markers... there would be no automatic capture, but when going through the labor of editing each frame, at least you would have a target.

::::: Opera :::::


operaguy ( ) posted Mon, 27 December 2004 at 12:37 AM

Originalplaid, are you in Poser 5? Collision detection! ::::: Opera :::::


kuroyume0161 ( ) posted Mon, 27 December 2004 at 2:29 AM

I've heard of that, operaguy. Similar to RealViz's ImageModeler 3D, but 4D of course. It's a means, I guess, but it sounds very, very, very, very tedious. Let's just say this: ImageModeler 3D is very, very tedious. ;) Okay, these are two different concepts. ImageModeler takes 3+ photos and creates a 3D model, which requires more correlated points than motion capture (per frame). But the frames will add up quickly. Although still tedious, you could just as easily take the videos into Poser and have your figure follow by posing at 'key' frames. This is the technique, but with only one video viewpoint, that I used for creating animated sign language poses. Since there was only one view, I had to check the work from different angles constantly.

C makes it easy to shoot yourself in the foot. C++ makes it harder, but when you do, you blow your whole leg off.

 -- Bjarne Stroustrup

Contact Me | Kuroyume's DevelopmentZone


operaguy ( ) posted Mon, 27 December 2004 at 3:41 AM

Kuroyume >>Although still tedious, you could just as easily take the videos into Poser and have your figure follow by posing at 'key' frames. <<

Would it be fair to call your "import footage and pose against it" concept "rotoscoping?"

You've posted a very interesting idea, as a matter of fact. I am mostly interested in pure facial motion capture (with head movement, blinking, etc., not just lip-synch) and although I am pounding Mimic hard right now, it's not really doing it for me. I am not giving up on Mimic, mind you...and frankly the two in concert might be really smart. Let me explain what I mean.

Let's say you take a good straight-on video of your actor speaking. You might or you might not place colored dots on the face. One camera only. You can import that vid into Mimic. [Note: Mimic allows import of video, but it only processes the audio track; the visual is just for reference inside Mimic]

You could then set Mimic to NOT invent gestures, only lips and mouth. There is a fine tutorial by Jake at DAZ teaching how to train Mimic to recognize the phonemes using the cr2 file of your character and the audio of the actor! That gets you kinda sorta close. You export the Mimic file as a poze -- an animation pose. It's totally easy and Poser-friendly.

You open up Poser and import the exact same video file. You select the face camera. Or perhaps another camera, but make sure it is solidly cemented like a rock on the character's face. You load the poze. If all is well, your Poser character's mouth will start and stop EXACTLY where it should for each sound. Since we told Mimic NOT to create gestures and head movement, the mouth should NOT move back and forth or up and down; just lip and mouth-muscle movement. I can tell you with certainty that Mimic does a REALLY good job of that "on cue" business.

Now, the movements of the mouth and lips might be pretty close to what is on the video. But to make it more exact you could rotoscope/keyframe the lips, teeth and mouth against the video, possibly. I don't know how you'd make the Poser figure's face "transparent" so you could see the video.

I wonder if you could make the video semi-transparent {I know how to do that} and position your Poser figure immediately behind the background. Is that even doable? If so, the vid would appear to play out right on the model's face.

For full head movement during speech, you could make ANOTHER Mimic file with gestures only, no mouth movement, export that as a pose, and see if it might animate your character's head movements and eyeblinks without messing up the lip-sync. OR... you could just go to the main camera and rotoscope/keyframe against the video, as you did.

Anyway, there are some ideas there.

::::: Opera :::::


operaguy ( ) posted Mon, 27 December 2004 at 4:15 AM

It looks like ImageModeler costs $1380 including VAT. Do you have the program, and have you worked with it extensively?

Do you see they have "MatchMover" software???

Here is the link. Strangely, there is no listing (or price) for it at their store, only the user manual.

I am currently checking out another brand of matchmover software for video capture. Will report on that tomorrow. MatchMover claims to do for faces and bodies what ImageModeler does for buildings, etc.: create a 4D file out of straight video footage (it requires dots on the actor's face).

::::: Opera :::::


Dale B ( ) posted Mon, 27 December 2004 at 5:59 AM

Operaguy: Thanks for the link to Gypsy! That frame is... intriguing. One has to wonder about its ruggedness, though. A good fall would either total the frame or kill the actor, unless there is some mechanism there that protects both... As for rotoscoping over a character's face: if you have P5 you can do it easily. It supports .avi files as textures on alpha planes. Just plop a plane in front of your character and run the avi on it (maybe with a script that puts a GUI slider on your desktop and alters the transparency of the plane in real time, so you can set the level of visibility of the model, or do the 'reference on, model invisible; reference off, model only' check for accuracy).
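A sketch of the slider script Dale B describes: a Tkinter scale that sets the transparency of the movie plane's material from PoserPython. The method names (Scene, CurrentActor, Materials, SetTransparencyMin/Max, DrawAll) are from the Poser Python API as I recall it and may need adjusting for your version; select the plane prop before running.

# Sketch: GUI slider controlling the transparency of the selected prop (the
# movie plane), so you can fade between the reference video and the model.
# PoserPython method names here are assumptions from memory of the Poser 5 docs.

import poser
import Tkinter

scene = poser.Scene()
plane = scene.CurrentActor()          # select the movie plane before running

def set_transparency(value):
    t = float(value) / 100.0          # the slider delivers 0..100
    for mat in plane.Materials():
        mat.SetTransparencyMin(t)
        mat.SetTransparencyMax(t)
    scene.DrawAll()                   # refresh the preview

root = Tkinter.Tk()
root.title("Plane transparency")
Tkinter.Scale(root, from_=0, to=100, orient=Tkinter.HORIZONTAL,
              length=250, command=set_transparency).pack()
root.mainloop()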


davidhp ( ) posted Mon, 27 December 2004 at 8:51 AM

As an inventor, I noticed this thread and couldn't help commenting. The logic seems compelling but will need a bit of development. Take a simple artist's mannequin: rigid wooden bones plus head, connected by ball joints. If each joint is a ball, then each ball might be a trackball, covered with dots as per the MS trackballs, and a sensor at each end of the joint reports its position on the ball as it is moved. Move the figure freely and the sensors report just like mice. The mouse driver technology exists, as does the driver software. What doesn't exist is the ability to collect data from (quick guess!) 30 'mice' at once on a single PC. All optical and no gears. One trivial problem is making the balls move freely enough to allow the joints to be moved by hand but stiff enough that the figure doesn't fall limply all over the place - additional friction is needed per joint. It would make a nice project for someone, and trackballs are always on offer in the bargain bins at big stores since so few people use them... a bit big, though (1 inch diameter?). Optical tech is probably too heavily patent-protected too, but even so it's worth a try, I'd have thought. Just an idea! Let me know if anyone wants to try it.



originalplaid ( ) posted Mon, 27 December 2004 at 8:56 AM

The other problem is the ability to map the captured data into a program, since a davidhp-type solution like the one above would be different than conventional mocap. Would Python scripting (adjusting more dials at once in Poser?) work? I never really played with it since I didn't even realize the Mac had Python scripting until last night! And the friction problem is something I already wondered about. I was thinking something like the joints on a GI Joe figure, but then realized the size required to insert the sensors would be huge. Let's recap, shall we! What ideas do you guys think would work best? Get a renderosity.com patent :) oh wait, open source, that's right...



davidhp ( ) posted Mon, 27 December 2004 at 9:08 AM · edited Mon, 27 December 2004 at 9:12 AM

Sorry.

Thought it was obvious.

Since all you need is the lengths of the bones and the angles returned by the sensors, all the rest of the mocap positional data can be calculated with simple math.

It won't record anything like fluid motion, but it would make stop-motion animation MUCH easier.

Message edited on: 12/27/2004 09:10

Message edited on: 12/27/2004 09:12
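The "simple math" davidhp refers to is just accumulating each joint's rotation down the chain. A minimal planar sketch (3D would use rotation matrices, but the idea is the same); the bone lengths and angles are made-up example values.

# Sketch: bone lengths + sensor angles -> joint positions (2D chain).

import math

def chain_positions(bone_lengths, joint_angles_deg, origin=(0.0, 0.0)):
    # Returns the (x, y) of each joint in a simple planar chain.
    x, y = origin
    heading = 0.0                       # accumulated rotation, in radians
    points = [(x, y)]
    for length, angle in zip(bone_lengths, joint_angles_deg):
        heading += math.radians(angle)  # each sensor reports a relative angle
        x += length * math.cos(heading)
        y += length * math.sin(heading)
        points.append((x, y))
    return points

# e.g. an arm: upper arm 30 cm, forearm 25 cm; shoulder at 45 deg, elbow bent -90 deg
print(chain_positions([30.0, 25.0], [45.0, -90.0]))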


andygraph ( ) posted Mon, 27 December 2004 at 1:04 PM

file_160803.jpg

Poser needs motion tracking; maybe a Python script would open a way to develop a motion capture tool for Poser!!!! It would just need some work... ;-) An example is attached.
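One way the Python route could look: walk a table of captured joint angles and key them onto the figure one frame at a time. The PoserPython calls (Scene, SetFrame, Actor, Parameter, SetValue, DrawAll) are as I remember them from the Poser 5 manual and may need adjusting; the capture data below is invented.

# Sketch: apply captured joint angles to Poser dials, frame by frame.
# In a real tool the CAPTURE table would come from the mannequin hardware.

import poser

# frame -> {(actor name, dial name): degrees}
CAPTURE = {
     0: {("Head", "Bend"):  0.0, ("Head", "Twist"):   0.0},
     5: {("Head", "Bend"): 12.0, ("Head", "Twist"):  -8.0},
    10: {("Head", "Bend"): 20.0, ("Head", "Twist"): -15.0},
}

scene = poser.Scene()
for frame in sorted(CAPTURE.keys()):
    scene.SetFrame(frame)                   # setting a dial here keys the value at that frame
    for (actor_name, dial), value in CAPTURE[frame].items():
        scene.Actor(actor_name).Parameter(dial).SetValue(value)
scene.DrawAll()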


Kenmac ( ) posted Mon, 27 December 2004 at 1:20 PM

Attached Link: http://www.oas.co.jp/products/mariogear/index.html

I was checking out some websites the other day when I came across this Japanese one, which has exactly what Originalplaid is looking for. It's called "Mariogear" and it's used with 3ds Max and Filmbox. I'm not sure how much it costs, as there is only an inquiry page. You may have to use a Japanese-to-English translator for the site, but the pictures more or less tell the story. Here's another link, in addition to the main one, that shows it being set up: http://www.oas.co.jp/products/mariogear/OpeVideo.html


bushi ( ) posted Mon, 27 December 2004 at 1:23 PM

Attached Link: Real World Control Thread

Been there, done that ... ;-)


kuroyume0161 ( ) posted Mon, 27 December 2004 at 1:48 PM

No wonder, Kenmac, with their almost fanatical devotion to anthropomorphic robots! :)

C makes it easy to shoot yourself in the foot. C++ makes it harder, but when you do, you blow your whole leg off.

 -- Bjarne Stroustrup

Contact Me | Kuroyume's DevelopmentZone


operaguy ( ) posted Mon, 27 December 2004 at 2:56 PM

Dale, thanks for that information. I have Poser5 and will follow that idea of putting the .avi on a plane in front of the model... ::::: Opera :::::


nomuse ( ) posted Mon, 27 December 2004 at 3:04 PM

Off the top of my head, this process is called "Go Motion" and was invented by Phil Tippett's studio. Or, at least, they are well known for using a similar system. A friend of mine was working for them during "Starship Troopers" and talked about sticking rotation sensors into the joints of puppets of the various bugs. After that, an animator could play with the bug, waving the claws around in menacing ways, with real-time capture of the joint rotations. Obviously the joint rotations on these specially built, correctly proportioned armatures translated very smoothly into animation of the 3D model. And of course the usual caveats of mocap apply: the data is large and "noisy" and hard for the 3D animators to clean up. But the technology was simple enough to be hobbyist level. I think, given the right rotation sensors and an old wooden artist's mannequin, you could work one of these things up in a couple of months of hard work.
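The cleanup step nomuse mentions can start as simply as low-pass filtering each joint's rotation curve. A tiny sketch of that idea (real pipelines use fancier filters, and the sample data is invented):

# Sketch: smooth a noisy per-frame rotation curve with a centered moving average.

def smooth(angles, window=5):
    half = window // 2
    out = []
    for i in range(len(angles)):
        lo, hi = max(0, i - half), min(len(angles), i + half + 1)
        out.append(sum(angles[lo:hi]) / (hi - lo))
    return out

noisy = [0.0, 2.1, 1.7, 3.9, 3.2, 5.4, 4.8, 7.1]
print(smooth(noisy))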

