Renderosity Forums / Poser - OFFICIAL






Subject: graphics card displays and texture resolution


MikeJ ( ) posted Fri, 30 January 2009 at 9:18 AM · edited Thu, 12 December 2024 at 2:40 PM

Someone please correct me if I'm wrong here...

Seems to me that all video cards display OpenGL-accelerated textures in powers of two. That is, 2, 4, 8, 16, 32, 64, 128, 256, 512, 1024, 2048, 4096, up to 8192.
If image files are anything in between those resolutions, the video card has to internally resize the image to display it. Assuming I'm right, that is.
So your GPU is not only calculating the rotating, moving, and zooming in/out on your model, it also has to spend processing time resizing those images.
Is this not right? I was always told to stick to powers of two for textures, for this reason.
If this is right, why are the vast majority of Poser commercial textures 1000x1000, 3000x3000, or some other off-size?

Is the internal resizing negligible for the GPU? Does it resize each image only once and then keep it that way, or does it have to do it every time a model or camera is moved? And does it resize to the next closest power-of-two size, whether up or down, or does it always go one way?
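For concreteness, here is a minimal standalone C sketch of the rounding being asked about: finding the nearest power of two below and above an arbitrary texture size. The helper names are made up for the example; this is just the arithmetic, not anything a driver actually exposes.

#include <stdio.h>

/* Smallest power of two >= n (assumes n >= 1). */
static unsigned next_pow2(unsigned n) {
    unsigned p = 1;
    while (p < n)
        p <<= 1;
    return p;
}

/* Largest power of two <= n. */
static unsigned prev_pow2(unsigned n) {
    unsigned p = next_pow2(n);
    return (p == n) ? p : p >> 1;
}

int main(void) {
    unsigned sizes[] = { 1000, 3000, 4096 };
    for (int i = 0; i < 3; i++)
        printf("%4u -> down %4u, up %4u\n",
               sizes[i], prev_pow2(sizes[i]), next_pow2(sizes[i]));
    return 0;
}

So a 1000x1000 map sits between 512 and 1024, and a 3000x3000 map between 2048 and 4096, which is exactly the question: which way would a card round, if it rounds at all?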



Fazzel ( ) posted Fri, 30 January 2009 at 10:02 AM

Maybe they don't know any better and just like even numbers?
You could try resizing a texture to 1024x1024, 2048x2048, or 4096x4096 and time it to see if you notice any speed improvement.



MikeJ ( ) posted Fri, 30 January 2009 at 10:48 AM · edited Fri, 30 January 2009 at 10:52 AM

I can't tell a difference with one texture or even a few. What I'm more concerned about is scenes that have several dozen of them, and I don't have the time to manually resize a whole mess of textures just for an experiment.
I'm more interested in the workings of it anyway, and I can't seem to find an answer on the technicalities of how a graphics card deals with textures on a 3D object. I'm hoping someone can either confirm or refute what I asked.

But anyway, resizing textures shouldn't be my job if I buy a texture pack. Not to mention that resizing a .jpg texture and resaving it as such would cause a slight loss of quality, while saving it as a BMP or PNG would leave any MAT pose files unable to find the textures.

Besides, if I'm right about this, even if the additional processing power is minimal, I would think the people at DAZ would know about it and know better. It might be minimal and negligible with a few textures, but what about with a hundred?



prixat ( ) posted Fri, 30 January 2009 at 11:13 AM

have a google for 'mipmap' :biggrin:

regards
prixat
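Mipmapping touches the original question, too: the classic GLU call gluBuild2DMipmaps() rescales a non-power-of-two image to a power-of-two size before building the chain of progressively halved copies (each level half the previous, down to 1x1). A small standalone C sketch of what that chain costs in memory, assuming a square 1024x1024 RGBA texture at 4 bytes per texel:

#include <stdio.h>

/* Mip-chain footprint for a square RGBA texture (4 bytes/texel).
   Each level halves the dimensions until 1x1; the whole chain adds
   about one third on top of the base level. */
int main(void) {
    unsigned dim = 1024;
    unsigned level = 0;
    unsigned long long total = 0;
    for (;;) {
        total += (unsigned long long)dim * dim * 4;
        printf("level %2u: %4ux%-4u\n", level++, dim, dim);
        if (dim == 1)
            break;
        dim /= 2;
    }
    printf("chain total: %llu bytes (base level alone: %u bytes)\n",
           total, 1024u * 1024u * 4u);
    return 0;
}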


nightfall ( ) posted Fri, 30 January 2009 at 11:44 AM · edited Fri, 30 January 2009 at 11:47 AM

Quote - Seems to me that all video cards display OpenGL-accelerated textures in powers of two. That is, 2, 4, 8, 16, 32, 64, 128, 256, 512, 1024, 2048, 4096, up to 8192.
If image files are anything in between those resolutions, the video card has to internally resize the image to display it. Assuming I'm right, that is.

The optimization is for memory usage, not speed.
A texture will occupy the same amount of memory as the next higher power-of-two size.
There is no resizing of images and thus no difference in speed. However, more textures can fit inside the video card's memory if they are sized in powers of two.
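A quick sketch of that memory arithmetic, under the round-up-to-the-next-power-of-two model nightfall describes (assuming an uncompressed RGBA texture at 4 bytes per texel):

#include <stdio.h>

/* Smallest power of two >= n (assumes n >= 1). */
static unsigned next_pow2(unsigned n) {
    unsigned p = 1;
    while (p < n)
        p <<= 1;
    return p;
}

int main(void) {
    unsigned w = 1000, h = 1000;          /* a typical Poser texture size */
    unsigned pw = next_pow2(w), ph = next_pow2(h);
    printf("%ux%u occupies %ux%u = %u bytes; the pixels themselves need %u\n",
           w, h, pw, ph, pw * ph * 4, w * h * 4);
    return 0;
}

By this model a 1000x1000 map wastes only a few percent of its allocation, but a 1025x1025 map would pad out to 2048x2048 and nearly quadruple its footprint.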


MikeJ ( ) posted Fri, 30 January 2009 at 11:55 AM

Thank you, nightfall. :-)



pakled ( ) posted Fri, 30 January 2009 at 12:18 PM

I think most video cards nowadays have processors of their own...not that that's important...;)



MikeJ ( ) posted Fri, 30 January 2009 at 12:30 PM · edited Fri, 30 January 2009 at 12:32 PM

Quote - I think most video cards nowadays have processors of their own...not that that's important...;)

Well yeah, of course they do. And yes, it's very important. ;-)
Although I have yet to see an OpenGL program that uses the GPU to its fullest. Most of them rely heavily on the CPU for OpenGL, although, believe it or not, Poser does a lot better with OpenGL than any of the other apps I've seen (except when it comes to OGL lights, where it really sucks). DAZ Studio may be an exception, as far as speed goes, but I can't really remember.
DirectX would be the way to go though, although the Mac users wouldn't be too happy about that...



svdl ( ) posted Fri, 30 January 2009 at 9:47 PM

As a graphics framework, OpenGL is superior in design to DirectX.
The problem is that consumer PC graphics cards come with highly optimized DirectX drivers, while their OpenGL drivers are sloppier. That results in the perception that DirectX itself is better.

Professional graphics cards (think nVidia Quadro or ATI FireGL) come with OpenGL-optimized drivers, and often with specialized drivers for the high-end 3D packages (Max, Maya). On professional graphics cards, OpenGL vastly outperforms DirectX.
The funny part is that those professional cards use exactly the same hardware as the consumer cards. It used to be possible to tweak an nVidia 6800-based graphics card (around $150 in those days) into a Quadro FX 4000 ($4000) using nothing but software. nVidia put a stop to that by adding one single pin to the chip, whose only function was to signal whether it was a Quadro or a GeForce - and alas, after the NV40 generation of chips the SoftQuadro route was no longer possible.
I used that trick on a $110 nVidia 6800LE, and 3DS Max performance increased about 10-fold under OpenGL, while DirectX didn't gain anything.

Using the CPU for OpenGL or DirectX calculations is NOT a good idea. Both OpenGL and DirectX are highly parallel and work with streams and pipelines. The graphics processor has many computing components that work on those streams in parallel; higher-end graphics cards can have 256 or more parallel units, whereas a CPU has no more than 4 cores (8 if you count i7 hyperthreading).
Okay, a single CPU core is significantly more powerful than a single GPU shader unit. But no CPU can beat 256 GPU shader units working in parallel.

In fact, there's a specialized non-realtime render engine, RT2, that doesn't render on the CPU the way Poser and most other applications do. It uses the GPU to do all the shader work, resulting in a 10- to 100-fold speed increase on even a modest graphics card.

So, DirectX is NOT the way to go. Neither is QuickDraw (or whatever it is called now), the Mac counterpart of DirectX. OpenGL is better.



MikeJ ( ) posted Sat, 31 January 2009 at 7:00 AM · edited Sat, 31 January 2009 at 7:02 AM

Quote - As a graphics framework, OpenGL is superior in design to DirectX.
The problem is that consumer PC graphics cards come with highly optimized DirectX drivers, while their OpenGL drivers are sloppier. That results in the perception that DirectX itself is better.

Yeah, I see what you're saying, and I've read a lot of arguments from both sides. I think that's what I meant, though: DirectX is better supported by video card drivers, which is why it seems to work better. In some cases, but not all.
I use Deep Exploration CAD Edition (version 5.6) for converting 3D files and batch-converting image files; it has an option for DX (9.0c) or OpenGL. It also displays the frame rate and has a benchmark tool. Some models do better in DX while others do better in OGL. If I'm getting slow performance in one mode, chances are it will improve if I switch. It seems to depend on the number of polygons, texture size, transparencies, and so on.
I also use Deep Paint 3D to paint on models, as well as modo 302 for painting and sculpting. Deep Paint 3D uses DX, while modo uses OpenGL. While modo's OGL performance overall is VERY good, Deep Paint 3D blows it away, even with the same models and the same textures.

But then compare Poser to LightWave in OpenGL performance. There is no comparison; Poser is a whole helluva lot faster. Although LW overall implements OGL better for most things such as textures, transparencies, and lights (plus it has GLSL support), Poser is much quicker when tumbling and moving the camera around.
That's with the same video card and same driver set, which seems to imply the differences between the two are not because of the drivers, but due to how each program is written to deal with it all.

So I probably shouldn't say DX is somehow "better" than OGL (although a lot of people do say so). It's all in how the app is written to deal with it.

I know that a GPU is far more powerful than a CPU for OGL or DX, but if you watch the performance graphs and temperatures for your CPU and video card while manipulating high-demand 3D objects in OGL, you see your CPU being hit pretty hard (though only on one core), while the GPU barely flinches.
So the programmers ARE relying more on the CPU, it seems. Why is that?



svdl ( ) posted Sat, 31 January 2009 at 7:12 AM

Quote - So the programmers ARE relying more on the CPU, it seems. Why is that?

Several reasons and explanations.

1. The graphics card is so much more powerful for this kind of work that it is mostly waiting for data that must be supplied by the CPU.

2. Drivers and compatibility. OpenGL 2.1 can do a lot in GPU hardware that previously was only possible on the CPU. Many applications, however, are written for OpenGL 1.1, which had far fewer capabilities - and is over 10 years old by now. So the things that OpenGL 1.1 cannot do are handled on the CPU.

The specialized OpenGL drivers and libraries for 3DS Max and Maya that come with the professional graphics cards suggest that reason 2 is the big one. Many calculations that were done by the CPU in the standard environment are offloaded to the GPU by those specialized libraries, and the result is a striking performance increase while manipulating 3D objects. (A quick way to see which OpenGL version your driver actually exposes is sketched below.)
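For instance, a minimal C sketch that prints the version and renderer strings the driver reports. It assumes GLUT is installed for window/context creation (glGetString() only returns something useful once a GL context exists); any other context-creation route would do the same job.

#include <stdio.h>
#include <GL/glut.h>

/* glGetString() needs a current GL context, so create a throwaway
   GLUT window first, then query the driver's version strings. */
int main(int argc, char **argv) {
    glutInit(&argc, argv);
    glutCreateWindow("GL version check");
    printf("GL_VERSION:  %s\n", (const char *)glGetString(GL_VERSION));
    printf("GL_RENDERER: %s\n", (const char *)glGetString(GL_RENDERER));
    return 0;
}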



MikeJ ( ) posted Sat, 31 January 2009 at 7:28 AM

Is it not true that the higher-end Quadros, while doing great in Max, Maya, and LW, will actually hurt performance in apps like Poser and your average 3D programs?

Sure would be nice if everyone was on the same page and you could buy a measly $400 card and expect to see a huge improvement in everything, not just games.



svdl ( ) posted Sat, 31 January 2009 at 9:44 AM

MikeJ - it's true, sort of. If you're spending $4000 on a professional graphics card, you expect it to perform better than a $150 consumer card, and for most applications that is simply not the case, because that $4000 professional card has exactly the same GPU as the $150 consumer card, running at the same clock speed.
Some consumer graphics card manufacturers overclock the GPU and memory by default. This is never the case with professional cards.

Why are those professional cards so bloody expensive? Part of it is that the chips installed on those cards have been rigorously tested, while consumer-card chips go through a quick test or no test at all before they're shipped. Another part is the specialized high-end 3D libraries - software for a small market, which inevitably means high prices. And - though this accounts for only a small part of the price - most professional graphics cards have more onboard memory than their consumer counterparts.


