Forum: Carrara


Subject: Render Nodes

HAWK999 opened this issue on Jan 20, 2006 · 41 posts


operaguy posted Tue, 24 January 2006 at 11:22 PM

"Sending a frame to a slower machine" is not an important problem IMO. So what if a slow machine gets a full frame? The others just keep going, and the slow one goes at its own pace. The very worst thing that could possibly happen is this: at the end of a long animation run on many machines, say 24 hours, the slow machine coincidentally gets a frame RIGHT at the very end and has to finish it while the others are done because there are no more frames; you might lose half an hour in the extreme case. In the end, the various nodes would each have rendered, say, 100-120 frames, while the slow one might have contributed 45. Meanwhile, all computers rendered at their own pace, with no downtime, and with larger tiles on each "swallow."

I am presuming that on a one-CPU render you'd want the largest possible tile, right? That's the way it is in Poser: you attempt big tiles (called buckets) but turn down the setting if they can't be digested. There is overhead involved in attacking a tile of a given size, and you want to incur that overhead as few times as possible per frame. The Carrara renderer might not have that same reality.

Let me ask this about the Eovia paradigm. What happens if you are rendering an animation for HD or digital film, with a two-hour budget per frame for an average node working by itself? During a network render, 10 machines have at it, all on the same frame. However, during the run, two of the computers crash or freeze. Is there intelligence built into the network render system to detect the partially rendered tiles and send them to another machine? If anyone can shed light on that, please respond.

Thank you ::::: Opera ::::
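For illustration only (I don't know what Eovia actually built into Carrara's network renderer): here is a minimal Python sketch of the bookkeeping a render master would need to handle that crash case; every name in it is hypothetical. Each tile handed out carries a heartbeat timestamp for the node that owns it; if a node goes silent past a timeout, its unfinished tiles go back into the queue for the surviving nodes to pick up.

# Hypothetical sketch, not Eovia/Carrara code: a master that hands out
# tiles of one frame, tracks a heartbeat per assignment, and requeues
# tiles whose node stops responding.
import time
from collections import deque

HEARTBEAT_TIMEOUT = 30.0  # seconds of silence before a node is presumed dead

class TileDispatcher:
    def __init__(self, tiles):
        self.pending = deque(tiles)   # tiles not yet assigned
        self.in_flight = {}           # tile -> (node_id, last_heartbeat)
        self.finished = set()

    def request_tile(self, node_id):
        """A node asks for work; returns a tile, or None if nothing is queued."""
        if not self.pending:
            return None
        tile = self.pending.popleft()
        self.in_flight[tile] = (node_id, time.monotonic())
        return tile

    def heartbeat(self, node_id):
        """A node reports it is still alive; refresh all of its assignments."""
        now = time.monotonic()
        for tile, (owner, _) in list(self.in_flight.items()):
            if owner == node_id:
                self.in_flight[tile] = (owner, now)

    def complete_tile(self, node_id, tile):
        """A node returns a finished tile."""
        self.in_flight.pop(tile, None)
        self.finished.add(tile)

    def reap_dead_nodes(self):
        """Requeue tiles whose owning node has gone silent (crash or freeze)."""
        now = time.monotonic()
        for tile, (owner, last_seen) in list(self.in_flight.items()):
            if now - last_seen > HEARTBEAT_TIMEOUT:
                del self.in_flight[tile]
                self.pending.append(tile)  # another node will pick it up

    def frame_done(self):
        return not self.pending and not self.in_flight

The design point is that the master never trusts a node to finish: a tile only counts as done when its pixels come back, so a crash costs no more than the time that node spent on its unfinished tiles.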