One answer is to do the sounds the other way around.
My SoundStager script would let you load up a single punch sound
and apply it to every contact of the boxing glove and the face,
for instance.
http://market.renderosity.com/mod/bcs/index.php?ViewProduct=57986
(Shameless plug)
Edit: I might add, SoundStager is also meant to serve as an overall
'track controller' and 'track mixer' for music, Mimic, and contact effects.
=====================
If you're going to do it the other way around, you just need to
read the waveform, which is possible with some practice.
Or use a wave editor to zoom in on the time scale and write down
the exact times where the punch needs to happen.
The hardest part of that is the anticipation: even after
you know where the contact happens, you need to "walk backwards"
on the Poser timeline to find where the punching move starts.
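For anyone who would rather script that step than eyeball a wave editor, here is a minimal sketch in plain Python (standard library only, not Poser-specific) that scans a mono 16-bit WAV for loud transients and reports each one's time and matching frame at an assumed 30 fps. The threshold and minimum-gap values are guesses you would tune by ear; the demo builds its own synthetic file so the numbers can be checked.

```python
import struct
import wave

def find_impacts(path, fps=30, threshold=20000, min_gap=0.25):
    """Return (seconds, frame) pairs where the sample amplitude spikes."""
    with wave.open(path, "rb") as w:
        rate = w.getframerate()
        nchan = w.getnchannels()
        raw = w.readframes(w.getnframes())
    # Unpack 16-bit little-endian samples, keeping only one channel.
    samples = struct.unpack("<%dh" % (len(raw) // 2), raw)[::nchan]
    impacts, last = [], -min_gap
    for i, s in enumerate(samples):
        t = i / rate
        # Register a hit when loud enough and far enough from the last hit.
        if abs(s) >= threshold and t - last >= min_gap:
            impacts.append((t, round(t * fps)))
            last = t
    return impacts

# Demo on a synthetic file: one second of silence with a spike at 0.5 s.
rate = 8000
samples = [0] * rate
samples[rate // 2] = 30000
with wave.open("demo_fx.wav", "wb") as w:
    w.setnchannels(1)
    w.setsampwidth(2)
    w.setframerate(rate)
    w.writeframes(struct.pack("<%dh" % len(samples), *samples))

print(find_impacts("demo_fx.wav"))   # -> [(0.5, 15)]
```

Once you have the frame numbers, you can key the contact pose at those frames and walk backwards from there for the anticipation.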
My python page
My ShareCG freebies
I think most of the time sound is done before animation. In lip-syncing, the sound generally occurs 3 or 4 frames before the action.
In a complex scene like you mentioned, you may want to import each soundtrack one by one and sync its action, "layering" actions as you add each new sound. So do character 1, layer on character 2, layer on hits, and so forth. It's one of those "slower is faster" type things.
ockham:
Does your script overcome Poser applying the audio to the 1st frame of the timeline, essentially overwriting the previous sound files?
I just completed a series of animations where it was tricky to have more than one character talking in a scene... I had to apply the same Talk Designer file to each and then delete the non-speaking characters' keyframes when they weren't supposed to be talking.
♠Ω Poser eZine
Ω♠
♠Ω Poser Free Stuff
Ω♠
♠Ω My Homepage Ω♠
www.3rddimensiongraphics.net
From what I've gathered from the Poser scene file, there is only ONE sound file that can be added, and it will start playing at frame 1 (because the sound file is just referenced, without any other parameters). This is not an exhaustive study, but I have not seen any way to circumvent this commandment. Maybe Poser Python allows more flexibility here?
Yes, one should definitely animate around the audio. Though, realize that the big studio 3D CG movies don't work this way - they animate first and then have the actors do the audio. There must be a reason for this successful approach...
"C makes it easy to shoot yourself in the foot. C++ makes it harder, but when you do, you blow your whole leg off."
-- Bjarne Stroustrup
Contact Me | Kuroyume's DevelopmentZone
Hope this question isn't too OT for this thread, but are there any tricks for animating (using Mimic in Poser 5) a character singing words instead of just speaking them? For example, should the words in the Mimic text file be typed more phonetically to draw out the words' action on the mouth?
Intel Core i7 3090K 4.5 GHz (overclocked) 12 MB cache CPU, 32 GB DDR3 memory, GeForce GTX 680 2 GB 256-bit PCI Express 3.0 graphics card, 3 Western Digital 7200 rpm 1 TB SATA hard drives
Terry Mitchell... I've tried singing but it doesn't work, at least that's how it feels, and that's even if you type it more phonetically. When you elongate vowels, that is not captured by Mimic; I think it may open the mouth but then close it again.
Back to the topic: thanks, everyone. When syncing animation to sound, the characters are not a problem. Usually I can just apply the same Mimic pose to all and then delete the mouth animation frames in the spans of time when a character is not talking. The difficult part is the sound effects. It would help if I could repeatedly hear a frame's bit of sound so that it can register in my mind. When clicking the ADVANCE ONE FRAME button, the sound is too short and too fast.
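On the "too short and too fast" problem, one workaround is to slice the audio that sits under a single frame out into its own file and loop it until it is long enough to hear. A rough sketch with Python's standard wave module; the 30 fps rate is an assumption, and the demo builds its own one-second source file rather than assuming yours:

```python
import struct
import wave

def export_frame_audio(src, dst, frame, fps=30, repeats=10):
    """Copy the audio under one animation frame into its own WAV,
    repeated so the burst is long enough to register by ear."""
    with wave.open(src, "rb") as w:
        rate = w.getframerate()
        start = int(frame / fps * rate)        # first sample of the frame
        count = max(1, int(rate / fps))        # samples per animation frame
        w.setpos(start)
        chunk = w.readframes(count)
        params = w.getparams()
    with wave.open(dst, "wb") as out:
        out.setparams(params)
        out.writeframes(chunk * repeats)       # loop the slice back to back

# Demo: build a one-second source file, then loop the sound under frame 15.
rate = 8000
with wave.open("demo_src.wav", "wb") as w:
    w.setnchannels(1)
    w.setsampwidth(2)
    w.setframerate(rate)
    w.writeframes(struct.pack("<%dh" % rate, *range(-4000, 4000)))

export_frame_audio("demo_src.wav", "frame15_loop.wav", frame=15)
```

Set the resulting file to loop in any audio player and the short burst becomes something you can actually identify.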
Depends on what kind of sound. If it's a soundtrack, then you have to figure out where on the timeline a certain beat or sound event occurs, then adjust your animation to fit.
When I'm doing various visualization presentations at work, I like to add sound effects in post production, where I mix the video with various soundtracks, either in Adobe Premiere or Sony's Vegas Video. They let you move and slide sound bites to match the animation, and they allow you to mix multiple audio and sound tracks. Especially Vegas Video.
Hi, my namez: "NO, Bad Kitteh, NO!" Whaz yurs?
BadKittehCo Store | BadKittehCo Freebies and product support
I play back the animation in the blocks view, so it goes at real time; keep your clicker over the pause button, and with practice it can be done :)
My workflow on this seems to be: audio compiling in Audition (which will let you load video to sync with), musical scoring in SonicFire Pro, and final syncing in Premiere Pro. I've found you get much finer control of timing in Audition, and just being able to assign music to one track, each vocal to a track, and foley to a track makes it one hell of a lot easier to debug a timing error.
Quote -
Yes, one should definitely animate around the audio. Though, realize that the big studio 3D CG movies don't work this way - they animate first and then have the actors do the audio. There must be a reason for this successful approach...
I've never heard of that before... one would think that if that was occurring, it would be due to waiting for "Big Name Talent" to sign on. However, the initial animating would certainly have been done using placeholder audio tracks (of non-big-name talent) to sync up to. After recording, the animation may need cleanup and changes due to subtleties of the big name's voice and personality that did not make it into the initial pass, creating more realism and believability in the character.
Gerard
The GR00VY GH0ULIE!
You are pure, you are snow
We are the useless sluts that they mould
Rock n roll is our epiphany
Culture, alienation, boredom and despair
It's very possible that the animation is done to 'placeholder audio'. The full process isn't discussed much except maybe in more exclusive circles one supposes. Obviously, the entire cadence (audio and video) has to be considered in the creation of the animation. But I can't believe that one is more dependent upon the other to such an extent as being portrayed here - as in, 'Make the animation fit this audio or else'. That would be suicidal. ;)
If you have the audio you want and need to create the animation around it, be prepared to do a lot of tweaking to gain synchronization. With properly done animation, the alternative of fitting the audio would appear to be less work. The best bet with Poser would be to be prepared to select columnar sets of keyframes and slide them backward or forward in time to achieve this after doing the general animation.
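The keyframe-sliding idea can be illustrated with a toy model. This is not Poser's API, just a plain-Python stand-in where a channel's keys are a `{frame: value}` dict; the point is that shifting a columnar block of keys is a single pass over the data:

```python
def slide_keys(keys, start, end, offset):
    """Move every key in [start, end] by `offset` frames, the way you
    would slide a column of keys in Poser's animation palette."""
    moved = {}
    for frame, value in keys.items():
        if start <= frame <= end:
            moved[frame + offset] = value   # key inside the block: shift it
        else:
            moved[frame] = value            # key outside the block: keep it
    return moved

# Hypothetical punch channel: wind-up, contact, recover.
punch = {10: 0.0, 14: 1.0, 18: 0.0}
print(slide_keys(punch, 10, 18, 3))   # -> {13: 0.0, 17: 1.0, 21: 0.0}
```

In practice you would find the audio contact frame first, then slide the whole block until the contact key lands on it.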
Quote - Hope this question isn't too OT for this thread, but are there any tricks for animating (using Mimic in Poser 5) a character singing words instead of just speaking them? For example, should the words in the Mimic text file be typed more phonetically to draw out the words' action on the mouth?
What I have done using mimic is this:
Get the words to the song and put them in a text document. Then split the lyrics into parts or verses.
(hard part)
:scared:
Get headphones and a mic; make sure they are not a combo unit. Play the part of the song you want to record and sing into the mic (I always make sure the wife is not around) and record a wave file.
Open the lyrics text and import it into Mimic, then open the .wav file you just created.
Mimic should now be able to make the file for use in Poser.
Poserverse The New Home
for NYGUY's Freebies
"I've never heard of that before..."
Actually, I've heard Jim Carrey, Robin Williams, and a couple of others talk about that as the way they do it.
Imagine a fisticuffs punch (as an example):
Punch... thud.
There is a natural timing to the sequence that would sound WAY off if it were too fast or too slow.
So, with a working sound script, it would be far, far easier to match the animation to the sound than to try to stretch/compress the sound to fit an already existing animation.
Nyguy, what I will have (eventually) is an isolated version of the song soundtrack recorded by a singer at the same time the actual music for the song and soundtrack was being recorded (which I will overlay in the final animation post production edit in Adobe Premiere). So I already have the song being sung. I just needed to know if I use this .wav file in MIMIC and simply type out the words being sung as is whether or not Mimic would apply the pose OK, or whether I'd have to type the text more phonetically.
Experiment. Sometimes Mimic can produce correct-looking mouth motions; other times they appear 'weak' for the sound being made. Things get closer if you add the typed script of the speech, and tweaking the spelling per the Mimic rules can get you the results you want. Don't be afraid to do some facial animation in Mimic, either. Adding the correct facial expression will really make a difference in how the scene reads.
Just to chime in with a quick note here on one of the methods we use in our studio...
Some audio software will allow you to put markers in your audio file; I think quite a few allow that. We actually use a few different audio applications, but all of them have some functionality that allows us to place these markers in the audio file and then export them as a txt file. One can open the text file and see the time of each marker, the marker name, and any notes you may have entered there. The time is displayed based on the timecode settings in the software, so you can work with either actual SMPTE timecode or absolute frames; absolute frames is what I prefer for most work.
What this does, then, is allow one to listen to the audio, dropping markers and placing notes in them for whatever animation needs to happen at that point, then save all that out as a txt file which then acts as, basically, an old-fashioned "dope sheet" or event list. Even when using automated lip-sync applications, I still find this to be a very handy bit of setup work, as it allows you to really plan out the animation and think through things in the performance that automation just can't nail down. When using this technique for straightforward actions, it gives you an immediate textual reference for the timing of your movements, so even if for some reason you don't want the audio loaded while you're animating, you can still reference your event list and be quite accurate with your work.
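As an illustration of turning such an exported marker list into an event list, here is a hedged sketch; the tab-separated "frame, name, note" layout and the sample marker names are assumptions, since real exports vary by application:

```python
def parse_markers(text):
    """Parse lines of 'frame<TAB>name<TAB>note' into sorted (frame, name, note)
    tuples -- a scriptable stand-in for an old-fashioned dope sheet."""
    events = []
    for line in text.splitlines():
        if not line.strip():
            continue                      # skip blank lines in the export
        frame, name, note = line.split("\t", 2)
        events.append((int(frame), name, note))
    return sorted(events)                 # chronological order by frame

# Hypothetical export from an audio app, out of order on purpose.
sample = "72\tpunch_1\tglove contacts jaw\n15\tmusic_in\tscore starts"
for frame, name, note in parse_markers(sample):
    print(frame, name, note)
# -> 15 music_in score starts
#    72 punch_1 glove contacts jaw
```

From there the event list can drive anything: a printed sheet next to the keyboard, or a script that drops placeholder keyframes at each marker frame.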
-Les
I'm asking this because it is hard to pinpoint the exact point in the timeline when a sound starts or when the high point of the sound occurs: for example, a fight scene. Well, just about any type of scene that has sounds. If I go frame by frame, sometimes hearing the sound is hard because it's really fast, just a short burst of sound I can't quite recognize. As for why I do sound before the animation: that's just my style; I find it easier. I make the total soundtrack with voice acting, music, and sound effects all in one file. To do the Mimic part, I export a version with only the voice-acting tracks ON, but the final import to Poser is with all tracks on.