Forum: Poser - OFFICIAL


Subject: any news from Nerd, what direction PP15?

MistyLaraCarrara opened this issue on Jun 03, 2015 · 355 posts


piersyf posted Thu, 11 June 2015 at 11:53 PM

My understanding of current GPU renderers is that they are limited in materials and functionality (no SSS, for example) and cannot render scenes larger than the card's onboard memory. It's also not surprising that most of them favour nVidia cards, since nVidia either owns the renderers or subsidises them in some way. Iray is a case in point... isn't it owned by nVidia? I think (and I'm sure I'll be corrected if I'm wrong) that Blender Cycles can only do SSS with CPU rendering, and Iray is limited to the card's memory size.
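(If it helps anyone, dropping Cycles back onto the CPU for an SSS-heavy scene is only a couple of lines in Blender's Python console; a minimal sketch against the current 2.7x API:)

    import bpy

    scene = bpy.context.scene
    scene.render.engine = 'CYCLES'
    # If the scene relies on SSS (CPU-only in Cycles, as far as I know),
    # force CPU rendering instead of the GPU compute device:
    scene.cycles.device = 'CPU'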

It isn't that nVidia is better, it's that AMD missed the boat in supporting unbiased renderers. My current card is a Sapphire HD 7970. It's 3 years old, and you can't really buy them in Australia any more (they all show as 'end of life' in the store ads). It has 3GB of GDDR5, over 2000 shader cores and a 384-bit memory bus. If I were to upgrade, the most highly recommended card of either make at the moment is the GTX 970: 4GB (although only 3.5GB of it runs at full speed), 1664 shader (CUDA) cores and a 256-bit memory bus. On paper that hardly looks any better, does it? But the internal architecture is better, and nVidia put the dollars into supporting renderers, so it's likely to be my next card (the $500 range here, which rules out AMD's R9 200 series).
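For what it's worth, raw bus width isn't the whole story either. A rough back-of-envelope comparison, using the approximate effective memory clocks from the two cards' spec sheets (which I haven't verified myself), looks something like this:

    # Theoretical memory bandwidth = bus width (in bytes) x effective transfer rate (GT/s).
    def bandwidth_gb_s(bus_width_bits, effective_clock_gtps):
        return bus_width_bits / 8 * effective_clock_gtps

    print(bandwidth_gb_s(384, 5.5))  # HD 7970: ~264 GB/s
    print(bandwidth_gb_s(256, 7.0))  # GTX 970: ~224 GB/s (less again on its slow 0.5GB segment)

So the older card still wins on raw bandwidth; it's the architecture and the renderer support where the 970 pulls ahead.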

The thing is, I don't do animations, but I do do "comics", or graphic stories: 80 to 120 images per story. I can't afford to wait a day for each pic just so I can use SSS, and if Poser actually implemented the rest of Firefly's capabilities it wouldn't be that bad as a biased render engine. As it is, I need to use other programs for big scenes (like Carrara, because it has instancing and global illumination), or I can use Blender, and once Cycles fully implements OpenCL it won't matter which card you have. Again, last I read over on the Blender Wiki, ATI cards can run Cycles better than CUDA cards in some circumstances. The question of which is better is confusing enough that my card decision is based more on gaming than on 3D rendering.
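(As an aside, the instancing trick Carrara uses is also doable in Blender by letting many objects share one mesh; a minimal sketch in the 2.7x Python API, with 'Tree' as a stand-in name for whatever you're scattering:)

    import bpy

    src = bpy.data.objects['Tree']       # stand-in name for the object to scatter
    for i in range(100):
        inst = src.copy()                # new object that still shares src's mesh data
        inst.location.x += i * 2.0       # spread the copies out a bit
        bpy.context.scene.objects.link(inst)  # Blender 2.7x API

Because the hundred objects all point at the same mesh, the memory cost stays close to a single copy.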

Frankly, I don't think the capacity is there yet. I've heard that card engineers are looking at something like stacked, layered memory of differing types so they can put 12GB or more onto a standard card. I'm watching, but not convinced to jump yet. It's a big financial commitment and a big change in process and workflow.