Methastopholis opened this issue on Oct 22, 2004 · 21 posts
kuroyume0161 posted Sat, 23 October 2004 at 12:02 PM
The game engine and hardware rendering capabilities (in this case, CPU and GPU) definitely make a difference, but the bottom line is speed - speed based on the number of polygons and points. Look at any 3D graphics card and the most highlighted features are texels/sec (texturing polygons) and vertices/sec (polygon points).

Using obviously rough figures, let's say there are 40,000 'actors' in the scene, each with only 500 polygons. That's 20,000,000 polygons to be processed per frame, not including the scenery and props. It doesn't matter how many are quickly removed from the rendering pipeline; each polygon still needs to be checked at least once per frame. I know for a fact that my machine (dual Xeon 2.66GHz, 4GB memory, GeForceFX 5900 Ultra) running Cinema4D/Poser/Vue/LightWave/Shade would choke on polygon counts like this.

We're talking about the difference between slow CPU renderers that give high quality at slow speeds and fast GPU renderers that give okay quality at high speeds. 3D animation software (like Maya, Cinema4D, LightWave, Houdini, SoftImage, 3DSMax, Shade, Vue, etc.) is geared toward the first type; 3D game engines using OpenGL or DirectX are geared toward the latter. As ynsaen points out, given the total per-episode budget, this is probably the best they could do within that budget considering other constraints.
C makes it easy to shoot yourself in the foot. C++ makes it harder, but when you do, you blow your whole leg off.
-- Bjarne Stroustrup
Contact Me | Kuroyume's DevelopmentZone