ockham opened this issue on Mar 24, 2001 · 11 posts
ockham posted Sat, 24 March 2001 at 5:57 PM
Here's a primitive version, which actually works pretty well "from a distance". Read the comments in the PY file to see the assumptions and requirements. Essentially, you import your sound file, set the animation length (in frames) to at least the length of the sound file, and then run the PY script to move the lips. Any other movements, whether manual or scripted, can be added on after that.

# ---------------------------------------#
# Lipsync (1st try) -- 3/24/01 -- by ockham
# [ occam24@gateway.net ]
# ---------------------------------------#
# This is an inflexible 1st approximation of a
# lipsyncing script.
# It assumes you've set things up in a certain way:
# 1. The sound must already be imported into the PZ3.
# 2. The animation should be set up for 10 FPS,
#    with a length in seconds at least as long as the sound file.
# 3. The figure speaking is "Figure 1", and has
#    the usual head parameters.
# 4. Sound must be mono, 16-bit, 11025 samples/sec.
# 5. Sound amplitude is not automatically normalized;
#    if the movements are too large, use a WAV editor
#    to cut the overall amplitude of your sound.
# ---------------------------------------#

import sys
import struct
import wave
import poser
import Numeric
import fft

# ---------------------------------------#
scene = poser.Scene()
figure = scene.Figure("Figure 1")

# Find the sound.
try:
    Filename = scene.Sound()
except:
    print("No sound file present")
    sys.exit()

# Open the sound file.
fp = wave.open(Filename, 'r')

# Figure how much of the wave file goes with one frame of the animation.
PPS = fp.getframerate()           # samples per second
TotalSamps = fp.getnframes()
FPS = scene.FramesPerSecond()
PPF = PPS / FPS                   # the 'jump' in the sound file for each animation frame

# ---------------------------------------#
# -- Main loop --------------------------#
# ---------------------------------------#
for i in range(scene.NumFrames()):

    # Set the file position corresponding to this animation frame.
    try:
        fp.setpos(i * PPF)
    except:
        break                     # past the end of the sound

    # Read one analysis window of 1024 samples.
    try:
        buf = fp.readframes(1024)
    except:
        print("readframes fails")
        break

    # Unpack the raw 16-bit samples and convert to an array of floats.
    nsamp = len(buf) / 2
    if nsamp < 1024:
        break                     # ran out of sound; leave the remaining frames alone
    samples = struct.unpack("%dh" % nsamp, buf)
    floatbuf = Numeric.array(samples, Numeric.Float)

    # Do the FFT on this window.
    spectrum = fft.real_fft(floatbuf, 1024)

    # Now use the spectrum to set the mouth parameters.
    scene.SetFrame(i)
    joint = figure.Actor("Head")  # The only part we need, so get it once.

    # Using 1024 for the size of the FFT, we end up with 512 spectral lines.
    # The sample rate is 11025, so Nyquist is about 5.5 KHz.
    # Very loosely, then, each line is about 10 Hz.  Glottal should be
    # centered around 150 Hz, or the 15th line, and should control the
    # Open Lips parameter.
    alterParm = joint.Parameter("OpenLips")
    val = abs(spectrum[15]) * 0.03
    alterParm.SetValue(val)

    # Try emphasizing the /o/-shape for F1 around 600 Hz.
    alterParm = joint.Parameter("Mouth O")
    val = abs(spectrum[50]) * 0.005
    alterParm.SetValue(val)

    # Try emphasizing the /i/-shape for F2 around 2 KHz.
    alterParm = joint.Parameter("Smile")
    val = abs(spectrum[200]) * 0.004
    alterParm.SetValue(val)

    alterParm = joint.Parameter("Frown")
    val = abs(spectrum[180]) * 0.006
    alterParm.SetValue(val)

# ---------------------------------------#
fp.close()                        # Done with sound file
# ---------------------------------------#
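For the setup step described above (animation at least as long as the sound), here is a small standalone check you could run first. It's only a sketch: it reuses the same Poser calls the main script already uses (Sound, FramesPerSecond, NumFrames) and just reports a problem rather than fixing it.

import wave
import poser

scene = poser.Scene()
fp = wave.open(scene.Sound(), 'r')
soundSeconds = float(fp.getnframes()) / fp.getframerate()
fp.close()

neededFrames = int(soundSeconds * scene.FramesPerSecond()) + 1
if scene.NumFrames() < neededFrames:
    print("Animation too short: need at least %d frames" % neededFrames)
else:
    print("Frame count OK (%d frames for %.1f seconds of sound)" % (scene.NumFrames(), soundSeconds))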
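A note on the spectral-line arithmetic in the comments: with a 1024-point FFT at 11025 samples/sec, each line is really 11025/1024, about 10.77 Hz rather than exactly 10 Hz, so the bin numbers in the script are rough picks. Here's the exact arithmetic as a standalone sketch (plain Python, not Poser-specific):

SAMPLE_RATE = 11025.0       # samples/sec, as the script requires
FFT_SIZE = 1024             # analysis window size

hz_per_bin = SAMPLE_RATE / FFT_SIZE     # about 10.77 Hz per spectral line

def bin_for(freq_hz):
    # Nearest spectral line for a target frequency.
    return int(round(freq_hz / hz_per_bin))

print(bin_for(150))    # glottal pitch   -> 14  (the script uses line 15)
print(bin_for(600))    # F1, /o/ region  -> 56  (the script uses line 50)
print(bin_for(2000))   # F2, /i/ region  -> 186 (the script uses line 200)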
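And for assumption 5 (amplitude isn't normalized): if you don't have a WAV editor handy, something like this standalone sketch could peak-normalize a mono 16-bit WAV before you import it. It uses only the standard wave and audioop modules, and the file names are just placeholders.

import wave
import audioop

inp = wave.open("speech.wav", "rb")
params = inp.getparams()
data = inp.readframes(inp.getnframes())
inp.close()

peak = audioop.max(data, 2)             # loudest 16-bit sample in the file
factor = 0.9 * 32767.0 / peak           # put the peak at about 90% of full scale
data = audioop.mul(data, 2, factor)

out = wave.open("speech_normalized.wav", "wb")
out.setparams(params)
out.writeframes(data)
out.close()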