RorrKonn opened this issue on Jun 26, 2014 · 35 posts
stewer posted Wed, 02 July 2014 at 2:17 AM
Quote - Stewer - I have always assumed that there was a direct relation between the quantity and size of texture maps used in a render and the RAM usage - i.e. a 2048-square map would require less RAM at render time than a 4096 map - but you're saying that's not the case, if I'm reading you right.
At render time, FireFly is given a cap on the maximum amount of RAM it will use for textures. That limit depends on the number of rendering threads and, in the case of a 64-bit instance, on the amount of physical memory in the machine.
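To make the idea concrete, here is a rough sketch of a cache budget that scales with physical RAM and thread count. This is purely illustrative - FireFly's actual heuristic is not public, and the function name and all the constants here are assumptions:

```python
# Illustrative only: NOT FireFly's real formula. Shows the shape of a
# texture-cache budget that depends on thread count and physical RAM.
def texture_cache_budget(physical_ram_gib: float, threads: int,
                         is_64bit: bool = True) -> float:
    """Return a hypothetical texture-cache budget in GiB."""
    per_thread_reserve = 0.25  # assumed per-thread working-memory reserve, GiB
    if is_64bit:
        # 64-bit: scale with physical memory, leaving room for render threads
        return max(0.5, physical_ram_gib * 0.5 - threads * per_thread_reserve)
    # 32-bit: address space caps the budget regardless of installed RAM
    return min(1.0, physical_ram_gib * 0.5)

print(texture_cache_budget(16, 4))         # 64-bit, 16 GiB machine -> 7.0
print(texture_cache_budget(16, 4, False))  # 32-bit: capped at 1.0
```

The point of the two branches matches the post: on 64-bit builds the budget can grow with installed RAM, while a 32-bit process hits an address-space ceiling no matter how much memory the machine has.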
When converting the textures, Poser reads them one at a time at full resolution, so resist the temptation to compress everything into one 100k by 100k texture :) The OpenGL preview cannot use the texture cache, so there you are still limited by VRAM, preview texture size, and the number of textures in the scene.
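A quick back-of-the-envelope calculation shows why that 100k-by-100k texture is a bad idea: reading it at full resolution means decompressing every pixel into RAM at once. Assuming 4 bytes per pixel (8-bit RGBA; floating-point formats would be far worse):

```python
# RAM needed to hold one uncompressed texture at full resolution.
def uncompressed_bytes(side_px: int, bytes_per_pixel: int = 4) -> int:
    """Bytes required for a square texture of edge `side_px`."""
    return side_px * side_px * bytes_per_pixel

for side in (2048, 4096, 100_000):
    gib = uncompressed_bytes(side) / 2**30
    print(f"{side:>7} px square -> {gib:8.2f} GiB")
# 2048 -> 0.02 GiB, 4096 -> 0.06 GiB, 100000 -> ~37.25 GiB
```

So a 2048 or 4096 map is trivial to load, while the single giant texture needs roughly 37 GiB of RAM just for the conversion step, before rendering even starts.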
Quote - BTW, I re-rendered (had the test scene saved, so just started up and plugged in the exr map) - 5 sequential renders timed at 6m32s, 6m24s, 6m17s, 6m11s, and 6m12s.
I took another look at the screen shot, and the difference in displacement value is the culprit. Poser automatically estimates the displacement bounds from it, and the large displacement bounds make the scene using OpenEXR use more render time and memory. If you want to scale the displacement without affecting the displacement bounds, use a math multiply node.
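The cost of a large displacement bound can be sketched numerically. A REYES-style renderer like FireFly has to pad each bucket's bounding region by the displacement bound so that displaced geometry from neighbouring buckets is not missed; a bigger bound means each bucket pulls in more geometry. The bucket size and bound values below are made up for illustration:

```python
# Sketch of why a large displacement bound costs time and memory: every
# bucket's bounding region is padded by the bound, so more geometry falls
# into each bucket. Numbers are illustrative, not Poser internals.
def padded_volume(bucket_edge: float, bound: float) -> float:
    """Volume of a cubic bucket padded by `bound` on all sides."""
    edge = bucket_edge + 2 * bound
    return edge ** 3

modest = padded_volume(1.0, 0.01)  # small, tight displacement bound
huge = padded_volume(1.0, 0.5)     # bound inflated by a big displacement value
print(f"padded region is {huge / modest:.1f}x larger")  # ~7.5x
```

That is why scaling the displacement with a math multiply node (which leaves the estimated bounds alone) renders faster than cranking up the displacement value itself.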
BTW, I recommend using either displacement or normal/bump and not applying a bump map on top of displacement.