Forum: Poser - OFFICIAL


Subject: Codec ?

trades2cash opened this issue on Mar 31, 2008 · 7 posts


Dale B posted Tue, 01 April 2008 at 5:55 AM

My patented screed on animating to codec..... (redux). This is the result of a lot of experimentation, and reflects more than a few months of learning the hard way.

One of the seemingly neatest things for animators is the ability to have your preferred 3D application render with a codec, so you can just go away and have a movie file of some sort waiting for you when you get back. But if you've noticed, a lot of animation troubles trace back to the fact that the image output was run through a codec as it was rendered, one of the most common symptoms being pixellation or mosaic artifacting as the movie file plays. The absolute best method for dealing with this (and a host of other issues) is to -not- attempt to create a codec-compressed animation straight from a rendering program. Render your animation as uncompressed frames (BMPs, TIFFs, Targas, or PNGs; more and more apps also have .psd output, which gives you the option of Photoshop layers, and that can be a godsend for postwork. Or if you have After Effects or Combustion, there is the RLA/RPF format...) and assemble them in a video editor. A quick search will reveal the existence of several freeware apps that work well; if you go the purchased route, the less expensive end is the Magix line of software or the QuickTime Pro application. If you want more power, there's Adobe Premiere or Final Cut, After Effects or Combustion. And there are others. While this does add a stage or two to your rendering pipeline, the benefits far outweigh the cost. (A quick sketch of the assembly step is just below; the reasons why it's worth it follow after that.)
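For anyone who wants to script that assembly step instead of clicking through an editor, here is a rough Python sketch. It assumes ffmpeg is installed and on your PATH, and that your frames are named frame_0001.png, frame_0002.png, and so on; both of those are my assumptions, not anything your renderer gives you by default, and any video editor will do the same job through its GUI.

    import subprocess

    def assemble(frame_pattern="frame_%04d.png", fps=30, out="animation.mp4", crf=18):
        """Stitch an uncompressed frame sequence into a compressed movie.

        crf is the quality/size trade-off: lower means better quality and a
        bigger file. Mosaic artifacts? Lower it and re-run. The raw frames
        are untouched, so re-encoding costs nothing but time.
        """
        subprocess.run([
            "ffmpeg",
            "-framerate", str(fps),   # input frame rate
            "-i", frame_pattern,      # numbered frame sequence
            "-c:v", "libx264",        # the codec choice is yours to experiment with
            "-pix_fmt", "yuv420p",    # widest player compatibility
            "-crf", str(crf),         # the compression rate knob from point 4 below
            out,
        ], check=True)

    if __name__ == "__main__":
        assemble()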
1) By doing frame rendering, if something causes the render to abort, in most apps you can simply advance the frame counter to the frame just past the last good render and start from there. If you are doing codec compression as you render and something bombs, you have to start from scratch. (A sketch of this, and of point 2, is at the end of this post.)

2) You have uncompressed frames that you can take into a graphics program and work on. Say you render a scene and find that you miscalculated your ambient light levels, and things look a bit washed out (or too dark. Or there was a color level glitch. Or... you get the idea). You could import the first frame into Photoshop (or your app of choice; I still use PaintShop Pro), fiddle with it until it looks right, then batch process the rest of the frames, instead of wasting the time rendering the scene over again.

3) You have a raw source to save your butt with. Once you have those frames and are satisfied, you just burn them to disc or back them up to a safety drive. If something happens to that scene later in postwork, you have the raws to reload, instead of having to render it all over again.

4) Video editing apps tend to give you far more control over what a codec is doing. The halfway decent ones give you options on pixel shape, screen ratio, compression rates, and a bunch of other things, and this lets you experiment with settings until you are satisfied. Don't like how one compilation turned out? Just do another one with a different codec and screen ratio. Find that your output has mosaic artifacts? Change the compression rate slightly. You can do anything regarding the final output, and still have your raw frames to start all over again if you goof.

5) It is far, far easier to add sound to uncompressed frames than it is to add it to compressed video. A lot of codecs assume that you want 'television' kinds of video output, so they take some liberties to get it. One of them is 'tweening', which is filling the gaps in a video stream by averaging between two compressed frames. If you are trying to synchronize a sound effect to something, it is possible that the 'frame' you want as your sound start doesn't really exist, except mathematically, so you can get a slight timing error. It might not be noticeable... or it might be just enough to blow the whole scene. (A ready example: open Poser, import a character, set a pose at frame 0001, then go to frame 0030 and set another one, then run the animation. If you look at the animation palette, keyframes should be highlighted only at the first and last frames; there are no 'real' keyframes in frames 0002-0029, but since the figure has motion through that timespan, all the elements that make up keyframes are there. In animation that is called interpolation, but it is also an example of tweening, as the computer takes the values of the two keyframes and averages them across all the frames 'between'. If you tried to add sound in that 'tweened' timespan, you would be utterly dependent on your ability to fiddle with things until it sounded and looked right, and that is an acquired skill. There's a toy version of this interpolation at the end of the post.)

6) Post effects work is far easier with frames, particularly if your application supports .psd layer output or one of the embedded alpha channel formats. You can get your alpha mask as a layer (the white-on-black 'light' layer that can be used to 'cut out' sections where things exist in another animation, so you can composite the two together), for example. A compositing sketch is at the end of the post as well.

The whole problem with codecs in general is that render applications are memory hogs. But so are codecs. And when one gets into a fight with the other over the memory pool, one or both stutter, pause, and tend to get a bit unstable. Codecs also assume that they are going to have all the resources they need for the compression part of the work, and pausing a compression, or even slowing it down, can create artifacts that are impossible to get out.
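For the curious, here are the quick Python sketches promised above. These are my own illustrations, not anything built into Poser; they assume the Pillow imaging library is installed and that frames are named frame_0001.png and so on (adjust to taste).

First, points 1 and 2: one function figures out where to restart an aborted render, the other batch-applies the same brightness tweak you dialed in on frame one to the whole sequence.

    import glob
    import os
    import re

    from PIL import Image, ImageEnhance

    def next_frame_to_render(frame_dir):
        """Return the number of the first frame not yet on disk."""
        numbers = []
        for path in glob.glob(os.path.join(frame_dir, "frame_*.png")):
            match = re.search(r"frame_(\d+)\.png$", path)
            if match:
                numbers.append(int(match.group(1)))
        return max(numbers, default=0) + 1  # restart the render at this frame

    def batch_brighten(frame_dir, out_dir, factor=1.1):
        """Apply one correction to every frame instead of re-rendering.

        factor > 1.0 brightens, < 1.0 darkens; dial it in on frame one first.
        """
        os.makedirs(out_dir, exist_ok=True)
        for path in sorted(glob.glob(os.path.join(frame_dir, "frame_*.png"))):
            frame = Image.open(path)
            fixed = ImageEnhance.Brightness(frame).enhance(factor)
            fixed.save(os.path.join(out_dir, os.path.basename(path)))

Second, the tweening business in point 5 really is just averaging. A toy version of the Poser example (keyframes at frames 1 and 30, nothing 'real' in between):

    def lerp(a, b, t):
        """Linear interpolation: t=0 gives a, t=1 gives b."""
        return a + (b - a) * t

    def tweened_value(key1_frame, key1_val, key2_frame, key2_val, frame):
        """Parameter value at an in-between frame that exists only mathematically."""
        t = (frame - key1_frame) / (key2_frame - key1_frame)
        return lerp(key1_val, key2_val, t)

    # A joint rotation goes from 0.0 at frame 1 to 45.0 at frame 30.
    # Frame 15 stores no keyframe; its value is just this computed average:
    print(tweened_value(1, 0.0, 30, 45.0, 15))  # roughly 21.7

Third, the alpha mask compositing from point 6 (the filenames here are placeholders, and all three images must be the same size):

    from PIL import Image

    fg = Image.open("fg_frame_0001.png").convert("RGB")   # rendered element
    bg = Image.open("bg_frame_0001.png").convert("RGB")   # background plate
    mask = Image.open("alpha_0001.png").convert("L")      # white keeps fg, black keeps bg

    # Image.composite picks fg pixels where the mask is white, bg where it is black.
    combined = Image.composite(fg, bg, mask)
    combined.save("composite_0001.png")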