
Subject: Real time Poser renders? (Xbox thoughts)


soulhuntre posted Sun, 09 December 2001 at 3:05 AM · edited Tue, 26 November 2024 at 8:32 AM

On a side note - I got to play with an Xbox today. I have to tell you, the real-time stuff it can do is amazing. Halo is a well-done game with some amazing 3D environments and people, and Dead or Alive 3 has smooth-shaded human models with a surprisingly high polygon count and great lighting.

Full motion realistic humans in real time might be pretty close :)


Vaio_Con_Dios posted Sun, 09 December 2001 at 3:33 AM

Cool. Sincerely, Vaio


aleks posted Sun, 09 December 2001 at 3:52 AM

it would indeed be possible, even with the pc as it is now. poser spends half of its resources to keep up that ridiculous environment. an average pc has more horsepower in it than the xbox and a better graphics chip, and i'm sure that with optimised code (both system and application) it could do miracles... trouble is, the pc has to do other jobs at the same time and the xbox doesn't. the unreal2 engine for the pc should look even better...


kupa posted Sun, 09 December 2001 at 8:46 PM

aleks, Poser does take some cycles to process the UI, but nowhere near the amount you've alluded to. You can see for yourself by using the old P-O-2 trick to switch out to the old pre-Poser 3 UI style: the performance isn't noticeably different.

There are some things down the road that we can do to enhance performance, but a 2-4 meg texture on top of a 40,000+ poly, single-skinned, bending mesh isn't a realtime possibility today. Setting up joints, bones and IK at the exporter level to meet a dedicated game processor's requirements yields wickedly fast performance, but even so, the current crop of realtime 3D characters is a lot lower in poly count and texture size than typical Poser content. We're working towards some faster content for other systems in the future, though.

Sincerely, Steve Cooper
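
To put rough numbers on the realtime point (a back-of-the-envelope sketch only - the poly count, texture size and frame rate below are the ballpark figures from this thread, not measurements):

    # what a "typical Poser figure" asks of the hardware, very roughly
    polys_per_figure = 40_000            # assumed single-skinned bending mesh
    target_fps = 30                      # assumed realtime target
    texture_bytes = 1024 * 1024 * 3      # one uncompressed 24-bit texture (~3 MB)

    print(f"texture held in memory: {texture_bytes / 2**20:.1f} MB")
    print(f"skinned polygons per second: {polys_per_figure * target_fps:,}")
    # every vertex of that mesh also has to pass through joint/bone deformation
    # each frame before the card ever sees it, which is the expensive part

That's roughly 1.2 million deformed polygons per second, which is exactly the work a dedicated game pipeline avoids by setting the figure up at the exporter level.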


soulhuntre posted Sun, 09 December 2001 at 10:22 PM

"poser spends half of it's resources to keep up that ridiculous environment."

Well... no. Not nearly half. Obviously the pretty interface takes an above average amount of resources but not nearly half.

"an average pc has more horsepower in it then xbox and better graphic chip"

Again, this is not really accurate - the only chips available for general PC usage that approach the nVidia processor for the Xbox are the Geforce3 and the new Raedon's (8500?) and they fall short...

"David Kirk: The xgpu is based on the next generation of geforce3 architecture, but is far more powerful - it incorporates technology that is in development for the next generation beyond geforce3. Ffor example, geometry processing performance is 2-3x that of geforce3. xgpu and mcpx also incorporate technology that was developed for our nforce platform processor product line, including the directx audio hardware, 3D positional audio, dolby digital encoding, and broadband support." - FunXbox.com Interview with David Kirk of NVIDIA

 

"Setting up joints, bones, IK at the exporter level to meet a dedicated game processor requirements yields wickedly fast performance, but even so, the current crop of realtime 3D characters are a lot lower in poly count and texture size than typical Poser content."

Absolutely Steve, the polygon count issue is one of the reasons that we are getting closer... but not there yet. I do think that the new GPU's ability to calculate extended virtual' polygons will help a lot. For instance the new ATI card can dramatically extend the effective polygon count of a smooth and round (human) mesh.
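
Roughly what that buys you, as a sketch - plain midpoint subdivision here, just for the counting (a real scheme like ATI's TRUFORM N-patches also curves the new vertices toward the surface, which is what makes the mesh look rounder):

    # each subdivision level splits every triangle into four, so the effective
    # polygon count grows 4x per level while the mesh you store, rig and
    # animate stays the same size
    base_triangles = 10_000              # assumed base figure mesh
    for level in range(4):
        print(f"level {level}: {base_triangles * 4 ** level:,} effective triangles")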

It's all gonna be really fun to see :)


aleks posted Mon, 10 December 2001 at 3:02 AM

hehe, steve, i didn't mean 50% because i measured it - it's like the announcements from software companies that the new software will be the very best, the fastest and the only thing you'll ever need. just exaggeration to make my point. i'm aware it's not half of the resources, but it is a significant bit. since poser isn't relying on 3d cards as much as games do, and since the processor is doing a fairly big part of the job there, it isn't really necessary to spend its time creating a fancy appearance. one admires it for some days; after that it just gets in the way. often enough people ask in the forums how to get to this or that function, unable to understand the philosophy behind the ui, being used to the "standard" windows interface. in the end, i guess, it's a matter of personal taste, but imo the interface is not very ergonomic (both for working speed and personal working style).

"Setting up joints, bones and IK at the exporter level to meet a dedicated game processor's requirements yields wickedly fast performance, but even so, the current crop of realtime 3D characters is a lot lower in poly count and texture size than typical Poser content."

we are coming very close to it. epic announced that its new unreal-warfare engine handles up to 170,000 polygons, fully textured (ok, not a 4k pixel texture, but it sums up), with particle systems, dynamic light & shading. it doesn't need an xbox; it works on normal, stock pcs with modern cards. and that's while a game also uses a lot of pc resources for user input and, most importantly, ai, while poser spends most of its time waiting for me to make an input.

also, and that's what i meant by "horse power": the xbox runs on a 733 mhz pentium. at least here in germany it's impossible to find a pc system slower than 900 mhz, and with the new 2 ghz pentium around the corner, that would be almost triple the performance. as far as i remember, the xbox uses (only) a geforce2 graphics chip? on the other side, the xbox only needs ntsc or pal output at 50 or 60 hz, while most pcs run at least 1024x768 resolution at 75 hz or more, which at least doubles the graphics work required.

imo, we'll get nowhere near "real" humans as long as we use bitmap instead of parametric textures, be it for speed or quality reasons.
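
just to put numbers on that clock and resolution comparison (rough arithmetic only - it ignores overdraw, colour depth and everything else that actually matters):

    # pixels per second: console tv output vs. a typical pc desktop mode (assumed modes)
    xbox_pixels_per_sec = 640 * 480 * 60
    pc_pixels_per_sec = 1024 * 768 * 75
    print(f"pc / xbox pixel throughput: {pc_pixels_per_sec / xbox_pixels_per_sec:.1f}x")

    # raw cpu clock, with the usual caveat that clock speed isn't everything
    print(f"2000 mhz / 733 mhz = {2000 / 733:.1f}x")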


soulhuntre posted Mon, 10 December 2001 at 3:25 PM

"and with new 2 ghz pentium around the corner, it would be almost the tripple performance. as far as i remember, xbox uses (only) an gforce2 graphic chip?"

Games are far more limited by the graphics processor than they are by the CPU. The Xbox GPU is significantly advanced above the GeForce3... you might want to follow the links in the post I made above... there is some good info there.


aleks posted Mon, 10 December 2001 at 3:39 PM

yes, i already read what he said, and it's new to me. i was thinking the same as the guy asking the questions - geforce2 or 3. frankly, it's hard to believe that they can make the graphics 2-3 times faster only by changing the architecture around it. besides, is it 2x or 3x? that's a significant difference. it's probably one of those benchmarks that works only for some special hardware... btw, how the hell are you making italics? :)


soulhuntre posted Mon, 10 December 2001 at 5:53 PM

"frankly, it's hard to believe that they can make the graphics 2-3 time faster only by changing architecture around it."

Well, there are several factors at work. First, the chip itself in inherently faster. Next, the memory bandwidth between the GPU and it's memory are much faster.

2x or 3x is hard to say in a situation like this... we are talking about multiple layers of programmable shaders and lighting issues... any deterministic benchmark would be fairly useless as a general measure of performance.

It is enough to say that the Xbox has a GPU that is significantly beyond what is generally available for PCs right now.

"it's probably one of those benchmarks that works only for some special hardware..."

One of the advantages of the Xbox is that the hardware is known in advance, which allows optimizations like that to be useful.

"btw, how the hell are you making italics? :)"

I edit my posts in an HTML editor, then cut & paste the HTML into the message window, checking off the box that says "do not apply formatting".


MallenLane posted Tue, 11 December 2001 at 12:56 AM

Well, I will say that current "off the shelf" GeForce3-based cards can handle an animated 40k-polygon figure quite easily, and can also handle a few 4k by 4k textures using hardware-based compression. They can render color, specular, bump and reflection mapping, and shadows, in hardware through the use of vertex shaders. One of the demos (Zoltan) that ships with most GF3-based cards is just as polygon/texture intense as the average Poser scene, and even renders using specular and phong shading maps. Let's not forget they were rendering Final Fantasy: The Spirits Within in realtime at the last SIGGRAPH on a card of that architecture, with scenes consisting of 1.5 million vertices rendered per frame at an average of 2.5 fps, using vertex shaders for bump and diffuse. The average Poser scene falls far below those requirements.
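
For scale, here is the vertex throughput those two cases imply (rough arithmetic from the figures above; the 30 fps rate is assumed, and polygon count is used as a stand-in for vertex count):

    # implied vertex throughput, using the numbers quoted above
    ff_demo_verts_per_sec = 1_500_000 * 2.5      # 1.5M vertices per frame at ~2.5 fps
    poser_figure_verts_per_sec = 40_000 * 30     # 40k-poly figure at an assumed 30 fps
    print(f"final fantasy demo:   {ff_demo_verts_per_sec:,.0f} vertices/sec")
    print(f"typical poser figure: {poser_figure_verts_per_sec:,.0f} vertices/sec")
    print(f"ratio: {ff_demo_verts_per_sec / poser_figure_verts_per_sec:.1f}x")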


kupa posted Tue, 11 December 2001 at 12:05 PM

I'm always learning here; it's good to read the comments from folks with far deeper experience of 3D hardware acceleration and game-specific processors. On this end, one of the challenges with a small development group such as ours is that adopting hardware support potentially increases our risk of incompatibilities. We want to offer enhanced performance, and we are looking at ways to tie into hardware. As I have deeper information and can do so, I'll share where we are heading.

Steve Cooper

btw, I've always wondered about italics as well ;-)


MallenLane posted Tue, 11 December 2001 at 1:02 PM

Very few companies are implementing the newest features, and those that do, in varying amounts - Softimage comes to mind. But those companies are basically implementing them because of their close ties to the realtime 3D game market, making it faster for artists to preview their work without constantly exporting to the game engine. Some programs have used the transform unit, and almost all programs use either OpenGL or DirectX for normal viewport acceleration. I think that for the average 3D program, the pixel and vertex shaders now making their way into current video cards make more sense for a realtime 3D paint program, where the idea is to get a fast realtime preview of a texture effect. At least until ART cards get faster at hardware raytracing, or consumer cards reach that point. =)


aleks posted Tue, 11 December 2001 at 1:46 PM

"As I have deeper information and can do so, I'll share where we are heading." (aarrgghhh - no italics - haven't got a clue about html code) steve!! does it means it's still not ready!? what about my christmas gift? my wife asked me what do i want for christmas and i said poser5 before she finished "...mas". oww, man! now i have to go for vue! ;-)) seriously: i'm glad that you are seemingly going for the compatibility. though nvidia is defacto standard in graphic accelerators it doesn't mean that it'll stay so - see voodoo. with graphic cards having extra power supply and water cooler i don't know if we hit the ceiling of what is possible (for consumer market) right now. multiple graphic chips? parallel processing? i don't know if windows xp supports this... i think it wouldn't be too hard to implement such a feature as "quick rendering" where, during moving of the figures you see only something like the best preview in poser as it is now and only when the movement stops, it gets "quick-rendered". with shadows, transparencies and full texture. i believe that directx is capable of doing so even if it takes couple of seconds. as mallenlane said, see final fantasy promotion at siggraph.


soulhuntre posted Thu, 13 December 2001 at 1:42 PM

"On this end, one of the challenges with a small development group such as ours, is that adopting hardware support potentially increases our risk of incompatibilities."

This is true and a definite problem. I think the easiest way is to support it in a non critical way.

For instance, Max will let you enable OpenGL or Direct3D shading in a viewport - but it does not treat such views as critical, or as definitive of what the final output will look like. The "ActiveShade" window in Max is a good example: if you turn it on and your hardware supports it, then all is well. If not, you leave it off. No harm done.
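
The shape of that, as a sketch (hypothetical function names, not Poser's or Max's actual API - the point is only that the accelerated path is never something the final output depends on):

    def draw_viewport(scene, use_hardware=True):
        # optional hardware-shaded view with a software fallback
        if use_hardware:
            try:
                return draw_with_direct3d(scene)      # hand the scene to the 3D API
            except RuntimeError:
                pass                                  # unsupported card/driver: no harm done
        return draw_software_preview(scene)           # the path that always works

    def draw_with_direct3d(scene):
        raise RuntimeError("no compatible hardware")  # stand-in for a real D3D path

    def draw_software_preview(scene):
        return "software preview"

    print(draw_viewport({}))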

I would be very interested in seeing what happens if you guys just dumped the Poser scene onto the Direct3D library and let it handle the details of optimizing for a specific card/driver - it's what it does (it's all it does! to paraphrase Terminator 1).

On the Mac side QuickDraw3D is similar.

It might be an eye-opening hack :)

