Forum Moderators: wheatpenny, TheBryster
Vue F.A.Q (Last Updated: 2024 Nov 21 4:12 am)
Hi
Nice tutorials you make.
I would use the Vista 64 machine with 8 GB as the main computer. 4 GB of memory isn't that much, with Windows using 600 MB to 1 GB depending on your configuration. 4 GB for the render-only nodes is probably enough. Windows XP 32 is really cramped; I had a lot of crashes which my Visual C++ debugger said were memory access violations. I moved Vue 6 (I have Vue 7 now) to Windows XP 64 with 8 GB of memory and didn't have any more crashes.
With Windows XP 32 it's very tight; Vue must have to do a lot of gymnastics to cram everything in there on larger scenes.
There is quite a big speed-up in rendering on a network. I never did an accurate speed test, so I don't know if it's a perfect ratio per CPU, but some of the renders I've done finish pretty quickly compared to using one computer. I don't know if you would gain very much with Windows 32 and just 2 cores; that might be more trouble than it's worth, and it might drop out of the render anyway. I've seen slower computers that can't keep up drop out. Adding more memory to Windows 32 does no good, because 32-bit Windows limits the memory an application can use to 2 GB (3 GB with the Large Address Aware flag set on). I don't know if that works in Vue.
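For what it's worth, you can check whether a given executable has that Large Address Aware flag set by reading its PE file header. A minimal sketch in Python (the Vue install path in the usage comment is just a placeholder, not the real location on your machine):

```python
import struct

IMAGE_FILE_LARGE_ADDRESS_AWARE = 0x0020

def is_large_address_aware(pe_bytes):
    """Return True if a PE image (pass the first few KB of the .exe as
    bytes) has the /LARGEADDRESSAWARE flag set in its file header."""
    if pe_bytes[:2] != b"MZ":
        raise ValueError("not a PE executable")
    # e_lfanew at offset 0x3C points at the 'PE\0\0' signature
    pe_offset = struct.unpack_from("<I", pe_bytes, 0x3C)[0]
    if pe_bytes[pe_offset:pe_offset + 4] != b"PE\0\0":
        raise ValueError("missing PE signature")
    # Characteristics sits 22 bytes past the signature, after Machine,
    # NumberOfSections, TimeDateStamp, PointerToSymbolTable,
    # NumberOfSymbols and SizeOfOptionalHeader
    characteristics = struct.unpack_from("<H", pe_bytes, pe_offset + 22)[0]
    return bool(characteristics & IMAGE_FILE_LARGE_ADDRESS_AWARE)

# Usage (path is hypothetical):
# with open(r"C:\Program Files\e-on software\Vue 7\Vue.exe", "rb") as f:
#     print(is_large_address_aware(f.read(4096)))
```

If the flag is off, the app is stuck at 2 GB even on a 3 GB-tuned system.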
Yes, you are correct about how the render nodes work.
Gary,
A guru I'm not, but I have been working out the details of a render farm myself.
There is a break-even point in network rendering speed. There is considerable communication between the host and the RenderCows, which is overhead beyond the rendering itself. The example you cite refers to a timeframe starting at around 1 hour; I personally feel that is about where you will begin to see a significant advantage from network rendering. Your math is probably close, assuming the cows are relatively equal in performance.
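That break-even behaviour is easy to see with a toy model. This is only an illustration, not a measurement; the 10-minute overhead figure is an assumption standing in for scene transfer plus tile handshaking:

```python
def network_render_minutes(local_minutes, n_nodes, overhead_minutes=10.0):
    """Crude model of network rendering: the render work splits evenly
    across the nodes, but scene transfer and tile handshaking add a
    roughly fixed per-job overhead on top."""
    return local_minutes / n_nodes + overhead_minutes

# A short render barely benefits; a long one scales much better:
for local in (15, 60, 240):
    print(local, "->", network_render_minutes(local, 3))
# 15  -> 15.0  (no gain at all)
# 60  -> 30.0  (2x faster)
# 240 -> 90.0  (about 2.7x faster)
```

The longer the job, the more the fixed overhead is amortized, which matches the roughly one-hour threshold mentioned above.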
My setup is an Intel Core 2 Quad Q6600 (2.4 GHz) host with 64-bit Vista and 8 GB of memory, plus two Intel Core 2 Quad Q8200 (2.4 GHz) RenderCows with 64-bit Vista and 4 GB of memory each. Twelve cores total. I also have two 32-bit Intel Core Duos on my LAN which can be included in the renders (bringing the total to 16 cores).
The RenderCows seem to have adequate memory (4 GB) for my work. One issue, though, is that the cores on the render machines are not fully utilized: the performance monitor indicates that only two cores are involved during rendering on the render nodes.
Prior to rendering, the scene data (all of it, I think) and all of the texture maps are transmitted to each node. Not only does this take some time (over a 10/100 workgroup switch), but large files with lots of textures could benefit from more memory.
Once rendering starts, the cows only process one tile at a time as directed by the host (which is why there doesn't seem to be a large memory demand on the render nodes after the textures, etc. are loaded).
My Vue is Version 7 Pro Studio+ (build 40707) so some of this information may not apply accurately to Infinite or xStream.
I hope the core issue is resolved in the future, since (local) rendering on the host utilizes all available cores at 100%.
I do allow the host to participate in the network renders and it handles the handshaking and rendering tasks easily.
Hope I hit all your points.
Charles
You also might want to take a look at the RANCH if you frequently do animation renders (we have a $50 free trial for new users). Now that you can define a render time limit for a render, you are sure not to exceed whatever budget you have. Interestingly enough, some of our customers have even stopped buying new computers when they discovered that outsourcing their renders was not as expensive as they thought...
Fred
I also would use the Vista 64 machine as the main machine, the HyperVue manager. The faster computer should be the main machine, because it is the one that sends all the data across the network, so the faster the better. But this also means it is the machine that should run Vue, and it looks like that isn't the case.
You definitely want the strongest computer to be your render manager, as Bruno said. The one thing to keep in mind with Intel chips is that hyperthreading is -not- the same as having an actual, dedicated core with its own L2 cache; the OS can show all the 'core' windows it wants, but when you are stressing the processor, as you do in rendering, you can start running into memory pool congestion, cache fragmentation, and signal collision, which will =really= slam the brakes on whatever processes are occurring. As long as the processes being hyperthreaded are relatively straightforward, they can help the render time. But there is a threshold where they do more to slow the works than help them. Unfortunately, those points are very dynamic, and come and go as the dataflow changes. I'm sure you've seen and heard all the complaining about 'I got that new Intel chip with X cores and it doesn't do jack!' Much of that is from trying to use multiple cores on a single-threaded app, but the rest of the problem is the HTs trying to do something they simply can't, because they are stealing idle time and creating the illusion of being something they aren't.
I use AMD Phenom chips with XX50 numbers. The XX00 numbers are the older ones that had a bug in them, causing lockups under heavy usage unless it was patched in the motherboard BIOS to run slower. XX50 CPUs have dedicated cores; the newer Intel i7s have dedicated cores also.
I also prefer a CPU that runs a little slower and quite a bit cooler for long, hard-running renders. (XX meaning the different model numbers within the CPU family.)
One thing you need to get right, and which hasn't been mentioned yet, is the networking between the machines.
The best solution is probably wired Gigabit Ethernet via a small switch.
You should be able to find the cards, switch, and CAT6 cabling at reasonable prices.
Note this can be used instead of, or in addition to, whatever networking you currently have.
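To put rough numbers on why Gigabit matters: the time to push a scene and its textures to each node scales directly with link speed. A back-of-the-envelope sketch (the 500 MB payload and the 70% real-world efficiency figure are assumptions for illustration):

```python
def transfer_seconds(payload_mb, link_mbit_per_s, efficiency=0.7):
    """Rough time to push `payload_mb` of scene/texture data over a
    link, assuming ~70% of nominal throughput after protocol overhead."""
    payload_mbit = payload_mb * 8           # megabytes -> megabits
    return payload_mbit / (link_mbit_per_s * efficiency)

# A 500 MB scene sent to each node:
print(transfer_seconds(500, 100))   # Fast Ethernet: ~57 s per node
print(transfer_seconds(500, 1000))  # Gigabit:       ~5.7 s per node
```

Multiply that by the number of nodes and by how often you resubmit, and the upgrade pays for itself quickly on texture-heavy scenes.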
Oh my yes. The one place you can really get bitten is the networking; aside from hard drive access times, it is the area -most- prone to latency issues. And while wireless will work, you are much safer with hard connections. Make sure you check your motherboard specs, as quite a few boards with a built-in NIC already have gigabit capability... but the chipset detection methods will automatically step your network speed down to 10/100 if the hardware thinks conditions will result in dropped bits. Overbuilding the network slightly almost never hurts your performance.
Another thing to consider is exactly how you intend to control your extra boxes. Render slaves or not, you'll still need control capability. What many people think is most obvious is to just reuse the monitor, keyboard, and mouse that came with each computer, but even 3 computers will eat too much desk space with that setup. My preferred method is one dedicated LCD monitor (a cheap one, as all you need is enough real estate to see the desktop clearly), keyboard, and mouse, connected to the render boxes through a KVM switch. A quick hunt through pricewatch.com will lead you to some real deals on KVMs with anywhere from 2 to 16 ports. If you go that route, you always have the option to add a couple more computers simply by getting the KVM cables and connecting them to unused ports. You'll find that planning room to grow into is a lot cheaper in the long run than having to kludge things together. And renderfarms are like potato chips; someone will offer you their 'piece of junk', which is perfectly fine, and....
Now you can use software to do the same thing, but as I have yet to find it needful, I haven't bothered doping out any of the remote control apps out there.
Oh, if your intended renderboxes have onboard video, then enable it and pull the video card. Do the same with the sound card, if any, and make sure you uninstall the drivers. The less power your computer pulls, the cooler it will run; 3 of my renderboxes run Athlon 64-3800 X2's, 2 gigs of DDR, and a hard drive, with no more than a 300 watt power supply. And they are stable. For that matter, one of them runs a RAID 0 with dual SATA drives on exactly the same power supply (it was an experiment. The data striping -really- makes a difference in speed. Of course if they have any corruption, you get to rebuild the array again. Whee.....)
Something else to keep in mind: more than just render apps can use a renderfarm. I have After Effects Pro, and on the boxes that can run it, I have the remote renderer for AE installed. All you have to do is set up basic networking so your systems can share drives, specify a watch folder and direct the renderers to watch it, then drop a project into that folder and start the app rendering; the remotes will jump in and speed things along. You could also designate one of the remote computers as an archive, storing backups of your projects on a physically separate machine. It just takes adding another drive. For that matter, you could mirror the backup across all of the farm units and have multiple archives....
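The watch-folder pattern those remote renderers use is simple enough to sketch. This is only an illustration of the idea, not AE's actual implementation; the `.aep` glob and the `handle` callback are assumptions for the example:

```python
import time
from pathlib import Path

def watch_folder(folder, handle, poll_seconds=5.0, run_once=False):
    """Poll `folder` for new project files and hand each new one to
    `handle` exactly once -- the basic watch-folder renderer pattern."""
    seen = set()
    while True:
        for path in sorted(Path(folder).glob("*.aep")):
            if path.name not in seen:
                seen.add(path.name)
                handle(path)          # e.g. kick off a render job
        if run_once:                  # single sweep, mainly so this
            return seen               # sketch can be exercised directly
        time.sleep(poll_seconds)
```

Every render box runs this loop against the same shared folder; dropping a project in is all it takes to fan the work out.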
I am seeking some advice from some of you gurus. Currently I have an Intel Core 2 Duo 2.4 GHz computer with 4 GB of memory and 32-bit Windows XP.
I am going to assemble a small render farm so I can render out some animations. I am going to get 3 computers to do the rendering. Their configurations are as follows: Intel Core 2 Quad 2.4 GHz, 4 GB of RAM, and 64-bit Vista.
Essentially I will have 14 processor cores to do the rendering. My questions are as follows:
With my single Core 2 Duo computer, if it takes, say, one hour to render an animation, will the addition of the others make it proportionally faster? In other words, if two processors take 60 minutes, will 14 do the rendering in 4.28 minutes, or at least reasonably close to that?
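One note on that arithmetic: ideal linear scaling divides total core-minutes, not wall-clock minutes, by the new core count, and real renders fall short of even that because of network overhead. A quick sanity check:

```python
cores_before, cores_after = 2, 14
minutes_before = 60.0

# 60 minutes of wall time on 2 cores is 120 core-minutes of work;
# spreading that over 14 cores gives the ideal (overhead-free) time:
ideal_minutes = minutes_before * cores_before / cores_after
print(round(ideal_minutes, 2))  # 8.57
```

So the best case is roughly 8.6 minutes rather than 4.28, before accounting for transfer and handshaking overhead.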
Since the rendering machines will only be used for rendering animations and still Vue scenes, is 4 GB enough for them? My understanding is that all they do is receive info from my main computer, process it, and send the results back for my main computer to put the image together. Am I correct?
Side note: I will upgrade my main computer in the next year to be much beefier, but for now I am focusing on the rendering computers.
I would appreciate any input as nothing is set in stone and I can still customize my choices.
Thanks much