Forum: Carrara


Subject: Cool trick to cut render times in half - why does it work?

jonstark opened this issue on Sep 06, 2010 · 16 posts


Kixum posted Wed, 08 September 2010 at 11:03 PM

Well I can't sit out of this thread anymore.

I have one big issue with render times and that's using raytraced depth of field.  Now if you combine ray traced depth of field with a hard hitting GI AND you have transparent objects in the scene (say like a glass marble or a clear plastic writing pen, and why would I mention that?), the render times go ballistic (like days, dudes and dudettes).

I will try this in combo with an image that has a big transparent object and ray traced DOF to see if it works.

What I really don't get here is why in the heck does it really matter?  When you render, all that stuff gets crunched at the beginning.  I always assumed that it didn't fuss with it anymore after that but apparently I was wrong.

And while I'm in rant mode,

When network rendering and GI came out (I can't remember what version), I did an article in Renderosity magazine to review the new version (this version of C had literally only been out about 24 hours).  I had TERRIBLE problems with network rendering using GI.  In fact, the problem persisted, and while I haven't checked it in a while, it was such a big problem that I've actually given up on network rendering.  I almost exclusively render using GI, and if network rendering can't support GI, then you don't use it (right?).

The problem was that different machines computed slightly different irradiance maps, apparently induced by numerics in the compiled code.  The supposed story was that C generated different results on different machines due to roundoff issues (or some such).  So each machine that contributed a little block to the final image had a different overall hue, and the result looked like you had overlaid some weird checkerboard on the final image (much badness).
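For what it's worth, here's a tiny illustration (not Carrara's actual code, just a sketch) of the kind of roundoff order-dependence that could make two machines disagree: floating-point addition isn't associative, so if two builds or two nodes accumulate the same irradiance samples in a different order, the totals can come out slightly different.

```python
# Floating-point addition is NOT associative: summing the same three
# values in a different order gives a different answer.
a, b, c = 1e16, -1e16, 1.0

left = (a + b) + c    # (0.0) + 1.0  -> 1.0
right = a + (b + c)   # the 1.0 is swallowed next to -1e16 -> 0.0

print(left, right)    # prints: 1.0 0.0
```

Multiply that kind of drift across millions of irradiance samples, accumulated in whatever order each node's compiler and scheduler happened to pick, and you can see how each block could pick up its own slight hue.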

When I pursued this with Eovia, they told me it was this numerics problem.  Well, my gut has always wondered why they couldn't follow these steps:

1. All networked machines compute the irradiance map.
2. When one of them gets done first (the speediest), it tells all the others to stop and then copies that map to all the other machines to use.
3. Then render.

I would think this process would then eliminate numeric issues because all of the rendering nodes would be referencing the same numerical result.

In the end, I have no idea of how C really works and that idea may just be stupid because C just doesn't work that way or something.  Maybe the maps are too big?  I have no idea.

It just seemed like it was a problem that could have been managed.

Well, maybe this didn't belong in this thread, but it's an irradiance topic so I mentioned it.  It sort of combs my hair backwards because this trick of halving render times shouldn't make a difference.

It's very cool that such a trick can be done, but sad that C isn't set up to do it the cool way in the first place.

Later,

-Kix