This is my idea for a piece of facial mocap software that captures facial expressions and lip sync from footage (QuickTime or AVI)! Call it VISUALMARKERFACE or MOCAPFACE, etc., for Mac and PC. This is only an idea, not a piece of software yet. I'm looking for a collaborator or collaborators to create this software together. Freeware? Shareware? I don't know yet ;-) But after UVMapper, it would be the next cool piece of software for Poser. Andygraph
Will the shareware version come with a free video camera and a PC/Mac capture card that can capture the footage? By the time they have bought all that, they could have gotten Mimic from DAZ and learned how to keyframe secondary facial expressions to go with the auto lipsync generated by Mimic.
The really cool part of this idea is that it could capture the facial expression in "automatic mode" (with 2D tracking) from data about marker positions (facial marker poses), which you could save to a library and develop further. VisualMarkerFace would work with just a map of markers linked to your personal morph targets and lip morphs! A lipsync tool doesn't take the expression from the actor; it only creates lip sync from audio. VisualMarkerFace could capture both the facial expression and the lip movement from a 640x480 QuickTime or AVI source! The same marker setup could be reused for other sequences, so you could build up your own personal library of facial expressions ;-) andygraph
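The "personal library of expressions" part of the idea could be sketched in a few lines. This is only a hypothetical illustration (the marker and expression names are invented, and a real tool would track the dots from footage first): save a named marker pose per expression, then classify a new frame by whichever stored pose it is closest to.

```python
import math

# Hypothetical sketch of a "personal expression library": each saved
# expression is a dict of marker id -> (x, y) position in the frame.
library = {}  # expression name -> {marker_id: (x, y)}

def save_expression(name, marker_pose):
    library[name] = dict(marker_pose)

def closest_expression(marker_pose):
    """Return the library entry nearest to the given pose, using the
    summed per-marker distance as the similarity measure."""
    def dist(stored):
        return sum(math.hypot(x - stored[m][0], y - stored[m][1])
                   for m, (x, y) in marker_pose.items())
    return min(library, key=lambda name: dist(library[name]))

# Invented example poses (pixels in a 640x480 frame):
save_expression("neutral", {"mouthL": (290, 300), "mouthR": (350, 300)})
save_expression("smile",   {"mouthL": (275, 292), "mouthR": (365, 292)})

# A freshly tracked frame lands nearest the stored "smile" pose:
print(closest_expression({"mouthL": (277, 293), "mouthR": (363, 294)}))
```

A real library would store many more markers per pose and could interpolate between entries rather than picking a single winner.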
Although it's beyond my programming capability, I think this is an outstanding and very workable idea. You wouldn't need any expensive equipment; a simple webcam would do nicely. You could record the dialogue (audio) while capturing the facial expression animation, and then use the recorded audio with Mimic to create the lipsynch. This would result in perfectly synchronized facial animation. This two-step process would be a huge time saver! Similar programs (that recognize data taken from a live video feed) already exist that, for example, allow you to juggle virtual objects and play virtual instruments. I don't think it would be a huge step to record the locations of reflective dots placed on one's face and translate them into varying degrees of morph dial settings. Andygraph, I'm all for this idea of yours.. I hope you find someone with the programming prowess to pull it off! Best of luck!! -=JFStan=-
Um... Just to let you know, there IS such a program that works and is readily available for high-end apps. Look up Face Station (or some such variation of it... Facestation, etc.). We have it at work. I played with it a bit when it first came out. It does all that you describe above, plus it has a feature like the Poser 5 Face Room, but implemented a lot better. - Ray
It seems this idea has merit.. Check out this site: http://www.famous3d.com/animation/Products/proFACEvideo.html This software obviously isn't designed for Poser, plus it's very expensive, but it's the same idea. The Poser version could be much simpler, since the expression morphs already exist within the characters. It would simply measure the movement of the dots and apply a certain value to the morph dial(s) based on the distance (or direction) moved. It seems that head movement is also controlled with this software by using "stabilizing markers". This may add a degree of difficulty in programming such an application for Poser. The user might just have to hold his/her head still during the capture process. So, any developers out there want to tackle this?? :) -=JFStan=-
VisualMarkerFace would be an automatic 2D tracking program: it takes the marker positions from 2D space (x/y coordinates). You don't need to load a mesh to work with it; you only need a single piece of footage ;-) Writing a 3D marker tracker in C++ is probably harder, but working with 2D markers is simpler, I believe. Andygraph
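The "automatic 2D tracking" step itself could be as simple as linking each known marker to the nearest detection in the next frame. A minimal sketch, assuming the per-frame dot positions have already been detected (e.g. by thresholding the bright reflective markers, a step real footage would also need):

```python
import math

def track(prev, curr, max_jump=30.0):
    """prev: {marker_id: (x, y)} from the last frame; curr: list of (x, y)
    detections in the new frame, in arbitrary order. Match each marker to
    its closest unclaimed detection within `max_jump` pixels."""
    out, taken = {}, set()
    for mid, (px, py) in prev.items():
        best, best_d = None, max_jump
        for j, (x, y) in enumerate(curr):
            if j in taken:
                continue
            d = math.hypot(x - px, y - py)
            if d < best_d:
                best, best_d = j, d
        if best is not None:
            taken.add(best)
            out[mid] = curr[best]
    return out

frame0 = {"browL": (250, 150), "mouthL": (290, 300)}
frame1 = [(292, 305), (248, 147)]  # detections arrive in arbitrary order
print(track(frame0, frame1))
```

The `max_jump` threshold is the invented part here: it keeps a marker from snapping to a far-away dot when its own dot is briefly lost, which is the usual failure mode of nearest-neighbour tracking.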
Andygraph, great idea. Unfortunately, to avoid complications in communicating with Poser, this would need to be a Poser plugin to work right, and CL is very protective of the data (like the global variable names they use) needed to ensure it functions correctly. It may be doable in PythonScript, but then you are requiring Pro Pack or Poser 5. A Poser plugin could enhance every version whose data was available to port it to. (And I doubt that CL is selling an SDK for anything prior to Poser 5 any more.)
I think it is a nice idea, but not practical for me. I would have to agree with Wolf359. It would really only take a few minutes to keyframe secondary facial expressions to go with the auto lipsync generated by Mimic, but with the system you are suggesting it would take me at least an hour just to set up the animation. Then, after I created the animation, I would still have to line up the audio track with the animation so that they match perfectly. Too much work for me, but it would be a great idea if Mimic weren't around.
I really do think it's a wonderful idea, though, for people who do not want to buy Mimic.
Romanboy, this program would be a companion to Mimic.. And since you could record the audio while capturing the expression and then use that with Mimic (as I mentioned in an earlier post), the synchronization of expression, speech and the audio track would be perfect and effortless. I really do believe this is an excellent idea. Andygraph, perhaps we should try to find programmers/developers in other forums who would like to take on the challenge.. What do you think? -=JFStan=-