
Renderosity Forums / Vue




Forum Moderators: wheatpenny, TheBryster




Subject: Getting the best out of Vue5i performance


DMFW ( ) posted Mon, 17 October 2005 at 12:19 PM · edited Sat, 30 November 2024 at 11:04 PM

I'm contemplating the purchase of a new machine and of Vue at the same time, having come into some unexpected money and feeling like splashing out on something to boost my rendering speeds :-) I'm in the Bryce camp at the moment, but EcoSystems look like a lot of fun and I've been lurking in the Vue galleries and thinking about Vue for a while... Rendering performance is an important consideration for me, and I'd be interested to know what people have experienced or would recommend. I know this is a hard question 'cos it depends on what quality you're looking for, but assuming money was no object, what would you look for in a new computer to maximise the performance of Vue 5 Infinite? Does anyone know if the software takes native advantage of hyperthreading or SMP if it's available? What about graphics cards? And then, assuming money is an object ('cos Vue may be Infinite but my cash isn't!), what would be the most important features of a new system?


wabe ( ) posted Mon, 17 October 2005 at 1:01 PM

Obviously you already have a computer :-))) So how about downloading the trial version of Infinite and seeing how it suits you? As a Mac person I would of course recommend... but I won't, and will let others tell you what you need. Maybe one thing - you may want to look over at www.cornucopia3d.com in the Vue General section. We run a user benchmark there, so you can see what experiences people have with their systems.

One day your ship comes in - but you're at the airport.


Rokol ( ) posted Mon, 17 October 2005 at 1:05 PM

Hyperthreading works with the RenderCow - high quality renders at a good speed! EcoSystems are so damn cool; worth the money alone.

The CPU is the important bit; graphics cards only matter for the OpenGL interface. I can render a 1024 x 768 pic in a few hours at 'highish' user quality thresholds with the RenderCow setup. My system is a P4 2.66GHz with 1GB RAM. Hope this helps!


Cheers ( ) posted Mon, 17 October 2005 at 4:46 PM

DMFW asked - "Does anyone know if the software takes native advantage of hyperthreading or SMP if it's available?" It sure does - run as many cores or CPUs as you want ;) "What about graphics cards?" Well, I would tend to go for nVIDIA-based chipsets. From experience with other apps that rely heavily on OpenGL, I tend to be a bit shy of ATI cards (the cards are good; it's just the drivers that seem to be flaky :/ ). If you have the money, go for an nVIDIA Quadro chipset... but it's not essential. In order of priority of important features: CPU, memory, graphics card, and then 64-bit hardware compatibility (64-bit is coming and it is probably best to make the jump now, and most native 32-bit software will run on it). Cheers

 

Website: The 3D Scene - Returning Soon!

Twitter: Follow @the3dscene

YouTube Channel

--------------- A life?! Cool!! Where do I download one of those?---------------


Vertecles ( ) posted Mon, 17 October 2005 at 11:24 PM

QUOTE - Cheers
"In order of priority of important features: CPU, memory, graphics card and then 64bit hardware compatability (64bit is coming and it is probably best to make the jump now, and most native 32bit software will run on it)."

What Cheers said!

With emphasis on the CPU. SMP (dual core, dual CPU) is the way to go. More cores = more threads = much, much happiness... and way faster render times.
What a RenderCow does is merely distribute the job among many CPUs. So already having multiple CPUs in your rig is obviously a powerful start.
Most graphics applications are multithreaded, which means they (should) really know how to take advantage of the extra CPU power.
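The tile-splitting idea behind a multithreaded renderer can be sketched in a few lines of Python (a hypothetical stand-in, nothing to do with Vue's actual code - `render_tile()` here just counts pixels where a real renderer would shade them):

```python
# Minimal sketch: split one frame into tiles and map them across CPU cores.
from multiprocessing import Pool

WIDTH, HEIGHT, TILE = 1024, 768, 128

def render_tile(origin):
    # Stand-in for per-pixel shading work: just measure the tile.
    x0, y0 = origin
    w = min(TILE, WIDTH - x0)
    h = min(TILE, HEIGHT - y0)
    return (origin, w * h)

def tile_origins():
    # Walk the frame in TILE x TILE blocks.
    for y in range(0, HEIGHT, TILE):
        for x in range(0, WIDTH, TILE):
            yield (x, y)

if __name__ == "__main__":
    with Pool() as pool:  # one worker per CPU core by default
        tiles = pool.map(render_tile, tile_origins())
    # Every pixel of the 1024x768 frame is covered exactly once.
    assert sum(px for _, px in tiles) == WIDTH * HEIGHT
    print(f"rendered {len(tiles)} tiles")
```

With more cores, `Pool` simply keeps more tiles in flight at once, which is the "more cores = more threads" point above.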

It's a shame stupidity isn't painful.


Mister_Gosh ( ) posted Tue, 18 October 2005 at 8:37 PM

I think you get a bigger bang for the buck spending your dollars on render farm machines. Four (cheap) machines are substantially cheaper than a quad-proc/quad-core single box. I just put together a farm of sub-$500 nodes and found it was money well spent.

It also isn't fair to claim that a dual-core or dual-proc system is the same as two nodes on a farm, since all the cores/procs in a single box have to work with the same memory modules (the farm nodes do have to pay the network overhead to get started, but in a complex render that's a trivial percentage of the time). It gets worse if you're just looking at hyperthreaded procs. While an HT proc will keep a single-threaded app like Poser from bogging the machine down, it won't approach anything like a 2x speed increase on renders (in fact, in a time trial between my wife's PC and mine, I found the Vue render times of the two machines were only minutes apart out of hours... even though my CPU is an HT P4 running a touch faster than her P4).

Finally, I think the most underrated value a render farm has is that it keeps your main computer free for other things. For example, I work in both Poser and Vue, so when I send my job out to the farm, I often continue to do test renders and work in Poser, which is something I couldn't reasonably do if I were letting Vue do the render on my main box.


Cheers ( ) posted Wed, 19 October 2005 at 2:58 AM

Mister Gosh, I will just add to your comments that HT technology hasn't lived up to the hype... but I could also point you to software that has been optimised correctly for HT and multi-CPU technology and produces a near-linear fall-off in render times as it is run with more threads... in fact, the software I'm thinking of has been optimised to such a degree (for HT/multi-CPU technology and in animation) that it is even capable of animating the number of threads used over the course of a rendering project. Generally, the limitations of HT technology/multi-CPU systems aren't down to the systems themselves, but to how well the software has been coded for them. Cheers

 



Mister_Gosh ( ) posted Wed, 19 October 2005 at 11:16 AM

Fair enough, but even the most parallelizable task will still hit the problem of a single bank of main memory (for a multi-core/CPU system) or a single L1 and L2 cache (for HT systems). If rendering were strictly CPU-bound, this wouldn't matter, but it is also a memory-bound operation once the scene hits a certain level of complexity, at which point you start paying a penalty for not having separate memory channels.

All of this is beside the point, because if it is cheaper to build multiple machines than a single multi-core machine with the equivalent number of cores, and it is easier and less error-prone to write code for multiple machines than something optimized for multi-core machines, then as a purely practical matter, multi-core isn't adding value to this specific problem. I'm not saying multi-core isn't valuable... I'm going to build myself a multi-core AMD box here soon. I'm just saying that if all you want to do is spend money on the optimal solution for this specific problem (speeding up render times on hobbyist or pro/am graphics software), your money probably buys you better results invested in more machines.


Singular3D ( ) posted Wed, 19 October 2005 at 11:58 AM

If you are using multiple computers, be aware of the licensing question. You may need a license for each of them, though Infinite may come with that option - I don't know. I also don't know if there can be a problem with Cornucopia store items. At least there are a lot of experts here who can clarify that.


DMFW ( ) posted Wed, 19 October 2005 at 2:39 PM

Thanks for all the advice folks - it's a fascinating discussion. I hadn't considered the option of putting together a cheap render farm. Interesting idea. I guess at the end of the day I'd prefer my solution in a more powerful single box, because this will also benefit the applications I might want to run that won't be "farmable", e.g. post-processing apps (Photoshop) or even games. Although fast rendering is my main motive for buying a new machine, it isn't the only one. But what this does make me think is that it could be worth hanging on to my old machine for possible use in a farm... Then again, if there is a disparity between the performance of the controlling machine and the subordinate farmed machines, is the software that controls the farming smart enough to parcel out the work in proportion? (Or does every machine on a farm get an even slice of work, so you have to wait for the slowest?)


JavaJones ( ) posted Wed, 19 October 2005 at 4:27 PM · edited Wed, 19 October 2005 at 4:30 PM

HT is really just a way of taking advantage of otherwise "lost" CPU cycles due to pipeline bubbles and stalls, which are more prevalent on the P4 architecture due to its extremely long instruction pipeline and thus higher branch misprediction penalty, etc. People often wonder why AMD hasn't implemented an HT unit, and the answer is that it wouldn't do much good: AMD CPUs tend to be running "flat out" more of the time because they're more efficiently architected. Generally this seems like a plus, but when you're multitasking the HT approach is a boon. Now that we have dual core I think it will be phased out, especially as Intel moves to CPUs based on their Pentium M mobile core, which itself is much more efficient (and shorter-pipelined) than the P4.

In any case, because HT is just "taking up the slack" and is nowhere near the equal of a full second CPU, it can never get much more than a 20% additional performance, unless the CPU was severely underutilized by the application in question without HT. This is the case in Terragen, for example, where 2 TG threads on an HT P4 can net up to a 60% performance increase - but that is very much the exception to the rule. And the trade-off in that case anyway is that a single Terragen thread runs about 25% slower than on an equivalent Athlon system.

It is easier to write local multithreaded code than it is to write networked multithreaded code, for the obvious reason that local multithreading does not need to deal with the overhead and unreliability of networking and its other issues. In other words, utilizing a local dual-core or dual-processor machine will be easier and more efficient than utilizing a networked farm of multiple machines of equivalent power. That being said, the farm solution would generally be cheaper, as has been said.

As far as memory limitations, most rendering is not particularly memory bound. You can see in rendering benchmarks like http://blanos.com/benchmark/ http://www.tabsnet.com/ and http://tgbench.kk3d.de that similar machines with different memory speeds and sizes do not tend to make a significant difference. Rendering spends far more in CPU execution time than it does in transferring large amounts of data around (this is unlike gaming which tends to require a lot more data transfer).

You're also contending with the memory limits of the operating system, especially on the PC side. If you have 3GB of RAM, you can assume 1GB is dedicated to the OS and the other 2 to your application. Having more memory wouldn't help because the application can only use 2GB max on a non-64 bit Windows system. And let's not forget Vue isn't 64 bit yet anyway.

Also keep in mind that running a multithreaded application locally is more efficient memory-wise because the scene is already in memory - both threads can access the same memory pool. It's not like each thread has to separately load the entire scene or anything. So let's say you have a scene that requires a full 2GB of RAM - would it be more cost-effective to buy 3GB of RAM for a dual-core system and multithread locally, or buy 3GB of RAM for each of two separate machines and do a network render? The answer, of course, is the single dual-core machine. This starts getting less attractive as an option when you start looking at Opterons or other high-end CPUs, of course, but now that there are mainstream dual-core CPUs this isn't really necessary.
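That shared-pool point is easy to demonstrate (a hypothetical Python sketch, nothing to do with Vue's internals): two render threads read one scene object in place, where two farm nodes would each need their own loaded copy.

```python
# Two threads split the work over ONE in-memory scene - no second copy in RAM.
import threading

scene = {"polygons": list(range(100_000))}  # stand-in for a loaded scene
results = {}
lock = threading.Lock()

def render_half(name, lo, hi):
    # Both threads read the SAME scene object; only the slice differs.
    total = sum(scene["polygons"][lo:hi])
    with lock:
        results[name] = total

t1 = threading.Thread(target=render_half, args=("a", 0, 50_000))
t2 = threading.Thread(target=render_half, args=("b", 50_000, 100_000))
t1.start(); t2.start()
t1.join(); t2.join()

# The two half-results recombine into the full job.
assert results["a"] + results["b"] == sum(scene["polygons"])
```

On a render farm, by contrast, each node would have to hold its own full `scene` in RAM before it could contribute - which is exactly why the 2GB scene favours the single dual-core box.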

Also don't forget the power and space needs of a renderfarm solution. A single dual core machine will still have about half the power needs of two single core machines, and of course exactly half the space needs. :D

So to get back around to the original point, I'd personally recommend an Athlon X2 system right now. I'd recommend going for the highest-clocked version you can, but don't worry too much about the cache size. For example you can get the Athlon 64 X2 4200+ at 2.2GHz with 2x512KB cache (1 for each core) for $470 and the 4400+ with 2x1MB cache for $530 - $60 more. The 4400+ has larger caches, but in most applications that will not translate into a significant performance increase, so go for the 4200+. It's probably the best price/performance ratio for the X2's right now anyway, although the 4600+ at 2.4GHz is tempting. Also note the 3800+ at 2.0GHz and $347 is actually less than 2x the cost of a normal 2.0GHz Athlon 64, the 3200+ at $190. That means you're no longer paying a premium for dual core; in fact it's more cost-effective (just as it should be). And that's not counting the fact that you don't need to buy a 2nd case, CD/DVD-ROM, hard drive, memory, etc. Shared memory is only an issue if you're running 2 different applications, or the application you're running is not properly multithreaded. But if you do plan on multitasking with memory-intensive applications a lot, definitely go for as much memory as you can.
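Those October 2005 prices (long out of date now, but as quoted in the post) make the dual-core premium - or rather the lack of one - easy to verify with a little arithmetic:

```python
# Price check on the figures quoted above (October 2005 US prices).
x2_3800 = 347    # Athlon 64 X2 3800+, dual core @ 2.0GHz
a64_3200 = 190   # Athlon 64 3200+, single core @ 2.0GHz

two_singles = 2 * a64_3200  # two single-core chips at the same clock
print(f"one dual core: ${x2_3800}, two singles: ${two_singles}")
# The dual-core part is cheaper than two equivalent single-core parts,
# before even counting the second case, drives, and memory you'd skip.
assert x2_3800 < two_singles
```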

So the short answer: get an Athlon 64 X2 system with 3+GB of RAM. You'll be king of the block. :D If you don't want to build it yourself (the cheapest and most versatile approach), these guys will build you a sweet machine on the cheap: http://www.monarchcomputer.com/ Save some money over Alienware, etc. ;)

  • Oshyan



Mister_Gosh ( ) posted Wed, 19 October 2005 at 8:28 PM

This analysis is great stuff, but it doesn't take into account some small practical matters.

First, we're comparing multiple special-purpose boxes to one general-purpose box. If you're going to build the one dual-core box as essentially a render node (no other apps running on it), then of course it will trump the single-core box. However, once you're building a brand-new general-purpose box, you're going to run other things on it. You're going to want to check mail and hit the web and do other things that will have a negative effect on your render times. Offloading to a node helps mitigate or eliminate those effects.

A special-purpose box can be built extremely cheaply, and there's more to it than just comparing the cost of single- and dual-core procs running at the same speed. You don't need fancy graphics or any sound, and you can shave a few GHz off the CPU clock speed (because at certain price points, it makes more sense to just buy more slower procs than fewer marginally faster ones). Because you're going to run a single small drive and no advanced graphics, you can comfortably run with a small case and a much smaller PSU than you would for a general-purpose box (I'm running all my nodes in small micro-ATX cases). You produce correspondingly less heat per unit (though it still adds up quickly, so if you don't have an out-of-the-way place for the boxes, it will be sub-optimal). Lite-On makes a perfectly good DVD drive for $20.

I don't want to get into anything resembling an argument, but I strongly disagree with the claim that networked code is harder to write than multithreaded code. I think the number of people writing networked applications (I'm including web apps here) vs. the number of people who write code that ever spins up a second thread (intentionally) bears this out. I'll leave it as an exercise to discuss what that means about the skill sets of most software engineers, but as a practical matter, that's the way of the world.
FWIW, I'm not saying it's a solution for everyone, but I am saying people should seriously consider it.


JavaJones ( ) posted Wed, 19 October 2005 at 11:29 PM

I didn't say "networked code is harder to write than multithreaded code". I said "...networked multithreaded code...". That's a key difference. It's easy as pie to render two concurrent frames on the same machine with dual CPUs/cores, or on separate machines by manually launching the processes. But even in that case, when the "effort" is not really in the code and not programmed specifically into the application, it's still easier to do on a single machine than over the network. Rendering something like a single frame across a network (which is equivalent to what you're generally talking about when a renderer is locally multithreaded) is much harder.

So when I say it's harder to write networked multithreading code, I'm talking about, A: rendering applications specifically - including networked web apps is meaningless, as the methods of doing work are essentially different, with seriously different demands; and B: utilizing the same methods. In other words, rendering a single frame on a single machine with 2 processors/cores is a lot easier to accomplish than doing the same over a network. Rendering multiple frames concurrently is similarly easier locally than remotely. The only time it's easier to write networked code would be if you're comparing rendering of a single frame with rendering of multiple frames (single frame on the multi-core CPU, multiple across the render farm). But that's comparing apples to oranges. Most major rendering engines have been multithreaded for ages, but comparatively few include any actual facility for network rendering of a single frame.

As for dedicated render boxes vs. one bigger all-around machine, having done a significant amount of benchmarking myself, I can tell you that e-mail, web apps, etc. impact rendering applications very insignificantly. Perhaps a few percent at most, amounting to perhaps an hour or two lost over several days of rendering.
I think I made it clear that there is no doubt a render farm will accomplish more work for less money. I certainly don't argue that. But if your current machine needs upgrading anyway, it makes more sense to get a nice main machine and later get a bunch of network boxes. It seemed like the original poster was already interested in a new machine.

People should also be aware of the space and power issues involved in having a render farm. I think you can all imagine, even with small cases, how much space several machines would take up. And power can be a problem too. Around here a kilowatt-hour of power costs upwards of 20 cents. If you have 5 machines, which will run at an average of 150 watts or so even with smaller PSUs, you're drawing 750 watts, and if you're using them consistently that's over 500 kWh of power use per month, or, translated into local power costs, over $100/mo.

That being said, if you're serious about rendering, especially if this is part of how you make a living, there is no method more cost-effective than a bunch of cheap render boxes. So bottom line, I think we're in agreement here. ;) - Oshyan
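The power arithmetic in that post checks out; a quick Python sanity check of the quoted figures (five nodes at ~150 W each, running around the clock at $0.20/kWh):

```python
# Verify the render-farm power-cost estimate from the discussion above.
machines = 5
watts_each = 150          # average draw per node
rate_per_kwh = 0.20       # dollars per kilowatt-hour
hours_per_month = 24 * 30

total_kw = machines * watts_each / 1000        # 0.75 kW total draw
kwh_per_month = total_kw * hours_per_month     # 540 kWh per month
cost = kwh_per_month * rate_per_kwh            # ~$108 per month
print(f"{kwh_per_month:.0f} kWh/month -> ${cost:.2f}/month")
```

So "over 500 kWh" and "over $100/mo" are both right, though as noted further down, a hobbyist who hibernates the nodes between final renders would pay only a fraction of that.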


Mister_Gosh ( ) posted Thu, 20 October 2005 at 12:06 AM · edited Thu, 20 October 2005 at 12:17 AM

Okay, okay, we've both made general statements when the other guy was making specific ones. I was thinking of the general multithreading problem space vs. the general "distributed over a network" space. Sorry for doing that.

The power/heat/space issues are the ones most likely to cause people problems with a farm solution. I'd love to do some measurement to find out what setup costs more in power though. 150W per machine seems a reasonable estimate under load for a single spindle machine (perhaps a touch on the high end, but that's so dependent on the specific components chosen that I won't quibble), so it absolutely is a concern. A hobbyist isn't likely to run them 24/7, however (in my case, I spin them up for final renders and hibernate them otherwise). The single machine, of course, uses less power over wall clock time, but runs the render job slower. Since you're likely spinning at least one extra spindle, and running more fans and higher-draw video, it seems quite reasonable to me that the single box could end up costing as much or more per render.

One of the interesting things that might come out of this is that if the machine the original poster has now is reasonably adequate outside of render times, it might be worth building a single dual-core/big RAM machine with low-end video to act as a pair of render nodes. That keeps the original computer under low load (and at that point, if games aren't entering into the equation, it could continue to be valuable as a "secondary render" machine for tests and whatnot while big jobs go out to the big box). At some point, the bigger box could get a second spindle and a good graphics card and grow into the "main box". Kind of a Hermit Crab approach to machine upgrades. ;-)



DMFW ( ) posted Sun, 23 October 2005 at 7:07 AM

Well, I've committed myself to ordering a new machine now and have gone down the route of a single powerful box rather than a render farm (mainly because of the other applications I'll want to run apart from Vue, and for space considerations). The new box will be powered by dual-core 3.2GHz Xeon processor(s), so hopefully that will help Vue. I like the notion of a Hermit Crab upgrade, but I've yet to decide whether I can utilise my old laptop in any secondary capacity. It'll be way underpowered compared with the new rig. Thanks for all the advice and an interesting discussion. My new box should arrive in the next couple of weeks, and then I'll do as wabe suggests and download the Vue demo, just about in time for my birthday :-) Very much looking forward to playing with Vue...

