MikeJ opened this issue on Jan 30, 2009 · 13 posts
MikeJ posted Sat, 31 January 2009 at 7:00 AM
Quote -
As a graphics framework OpenGL is superior in design to DirectX.
The problem is that consumer PC graphics cards come with highly optimized DirectX drivers, while their OpenGL drivers are sloppier, which results in the perception that DirectX itself is better.
Yeah, I see what you're saying, and I've read a lot of arguments from both sides. I think that's what I meant, though: Direct3D has better driver support on consumer video cards, which is why it often seems to work better. In some cases, but not all.
I use Deep Exploration CAD Edition (version 5.6) for converting 3D files and batch-converting image files, and it has an option to render with either DirectX (9.0c) or OpenGL. It also displays the frame rate and has a benchmark tool. Some models do better in DX while others do better in OGL. If I'm getting slow performance in one mode, chances are it will improve if I switch. It seems to depend on the number of polygons, texture size, transparencies and so on.
I also use Deep Paint 3D to paint on models, as well as modo 302 for painting and sculpting. Deep Paint 3D uses DX, while modo uses OpenGL. While modo's overall OGL performance is VERY good, Deep Paint 3D blows it away, even with the same models and textures.
But then compare Poser to LightWave in OpenGL performance. There's no comparison: Poser is a whole helluva lot faster. Although LW overall implements OGL better for most things, such as textures, transparencies and lights (plus it has GLSL support), Poser is much quicker when tumbling and moving the camera around.
Both are running on the same video card with the same driver set, so that seems to imply the differences between the two are not caused by the drivers, but by how each program is written to deal with it all.
So I probably shouldn't say DX is somehow "better" than OGL (although a lot of people do say so). It's all in how the app is written to deal with it.
I know that a GPU is far more powerful than a CPU at OGL or Direct3D rendering, but if you watch the performance graphs and temperatures for your CPU and video card while manipulating high-demand 3D objects in OGL, you'll see your CPU being hit pretty hard (but only one core), while the GPU barely flinches.
So, the programmers ARE relying more on the CPU, it seems. Why is that?
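My own guess at the answer (and it is a guess; I haven't seen Poser's or LightWave's source) is that it comes down to how the app submits geometry. Old-style "immediate mode" OpenGL makes a driver call per vertex, re-sent every single frame, all on one thread, so the CPU becomes the bottleneck. Uploading the mesh once to a vertex buffer object (VBO) and drawing from it keeps the heavy lifting on the GPU. Here's a minimal freeglut sketch of my own contrasting the two styles (vertex data is left zeroed for brevity):

    /* Sketch of the two submission styles, assuming freeglut and an
     * OpenGL 1.5+ driver. Build with: gcc demo.c -o demo -lGL -lglut
     * (on Windows, the glGenBuffers/glBufferData entry points need an
     * extension loader such as GLEW). */
    #define GL_GLEXT_PROTOTYPES          /* expose VBO prototypes on Linux */
    #include <GL/glut.h>

    #define TRIS 100000                  /* enough triangles to load the pipe */
    static GLfloat verts[TRIS * 9];      /* 3 vertices x (x,y,z) per triangle */
    static GLuint vbo;
    static int use_vbo = 0;              /* flip to 1 and compare CPU graphs */

    static void display(void)
    {
        glClear(GL_COLOR_BUFFER_BIT);
        if (!use_vbo) {
            /* Immediate mode: one driver call per vertex, every frame,
             * all on a single thread -- the CPU does the work. */
            int i;
            glBegin(GL_TRIANGLES);
            for (i = 0; i < TRIS * 9; i += 3)
                glVertex3fv(&verts[i]);
            glEnd();
        } else {
            /* Retained mode: the mesh already lives in video memory, so
             * a frame is a few cheap calls and the GPU does the rest. */
            glBindBuffer(GL_ARRAY_BUFFER, vbo);
            glEnableClientState(GL_VERTEX_ARRAY);
            glVertexPointer(3, GL_FLOAT, 0, 0);
            glDrawArrays(GL_TRIANGLES, 0, TRIS * 3);
            glDisableClientState(GL_VERTEX_ARRAY);
        }
        glutSwapBuffers();
    }

    int main(int argc, char **argv)
    {
        glutInit(&argc, argv);
        glutInitDisplayMode(GLUT_DOUBLE | GLUT_RGB);
        glutCreateWindow("immediate mode vs VBO");
        glGenBuffers(1, &vbo);           /* one-time upload to the card */
        glBindBuffer(GL_ARRAY_BUFFER, vbo);
        glBufferData(GL_ARRAY_BUFFER, sizeof(verts), verts, GL_STATIC_DRAW);
        glutDisplayFunc(display);
        glutIdleFunc(glutPostRedisplay); /* redraw as fast as possible */
        glutMainLoop();
        return 0;
    }

That would also explain the single-core part: an OpenGL context can only be fed from one thread at a time, so however many cores you have, all those per-vertex calls pile onto just one. And apps written before VBOs became common (both Poser and LW predate them by years) would have to be reworked to take advantage, which fits with the same driver behaving so differently under different programs.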