
Subject: Upgrading system


gaz170170 ( ) posted Sun, 06 July 2003 at 4:47 PM · edited Mon, 11 November 2024 at 4:12 AM

I currently run Vue on a 1 GHz AMD Athlon XP with 512 MB of PC133 SDRAM and a GeForce4 Ti 4400 graphics card. I want to upgrade to an Intel P4 2.4 GHz chip, with an 800 MHz FSB and multi-threading capabilities. Am I right in thinking this works like two processors, so that in Vue (with RenderCow) I can render animations quicker? Obviously more memory is better, but my budget is limited, so will 1 GB of DDR400 be sufficient to start with? Any advice gratefully received.


draklava ( ) posted Sun, 06 July 2003 at 7:16 PM

Gaz - multi-threading is a function of the operating system (think Windows XP vs. Windows 3.1), so your old system is as capable as your new one in that regard. You will see a performance increase from the faster processor, faster FSB and DDR memory, and 1 GB should be quite sufficient - you can always add more later. If you use RenderCow and set up your old machine and your new machine to render a scene, you should see a pretty good decrease in render time for animations (almost half). For still images it doesn't matter, because RenderCow sends out a full frame at a time, so a single frame either renders on your main computer or on the old one. Anyway - throwing extra computer(s) at a render, even if they are older systems (like a P2, etc.), will really help improve your render time.
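To put rough numbers on that "almost half": here is a small back-of-envelope simulation (hypothetical Python, not Vue or RenderCow code; the per-frame timings are made up) of frame-at-a-time distribution, where each node grabs the next frame as soon as it finishes one:

    import heapq

    def farm_render_time(frames, node_secs_per_frame):
        """Makespan when each node pulls the next frame as it becomes free."""
        free_at = [(0.0, i) for i in range(len(node_secs_per_frame))]
        heapq.heapify(free_at)  # (time node becomes free, node index)
        finish = 0.0
        for _ in range(frames):
            t, i = heapq.heappop(free_at)   # next node to free up
            t += node_secs_per_frame[i]     # it renders one more frame
            finish = max(finish, t)
            heapq.heappush(free_at, (t, i))
        return finish

    # 300-frame animation; new 2.4 GHz box alone vs. with the old 1 GHz Athlon:
    print(farm_render_time(300, [60.0]))         # 18000 s
    print(farm_render_time(300, [60.0, 130.0]))  # ~12300 s

Even a second node that is less than half as fast knocks roughly a third off the total, which is why throwing old boxes at an animation render pays off.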


forester ( ) posted Mon, 07 July 2003 at 10:02 AM

Gaz, draklava is correct - multithreading is a function of the operating system. It is the mechanism by which the OS passes streams of instructions to the CPU. But multithreading by itself will have no real effect on your rendering time. For the two machines you describe above, only the faster CPU and the faster bus speed will have an effect on rendering times; these should decrease your rendering time by approximately 30-34%, though. The amount of RAM doesn't really affect rendering speed (only the CPU and bus speeds do), but it will significantly affect your ability to model and place objects in your scenes. More RAM is a major help there, too. There are only two ways to speed up rendering in Vue: more CPUs (two in one box, or more via linked machines and RenderCow) or a faster processor. Of the two, more CPUs offer the bigger speed gain. A dual-processor motherboard with two Pentium III 800 CPUs still outperforms a single Pentium 4 2.x CPU, but only in rendering. Those dual 800 boards are still widely available, for cheap prices. For anyone thinking of a renderfarm (two linked PCs), this is the most economical way to get a major gain in rendering.



Dale B ( ) posted Mon, 07 July 2003 at 12:41 PM

About the only thing I would add is that installed memory -will- affect the rendering time of that specific render node, but -only- depending on the load you place on it. The three boxes in the rendergarden I'm playing with are all running 256 MB each; one is PC133 (an old Athlon 700 Slot A), the other two are PC2100 (an XP 1800 and an XP 2500+... yeah, I know I'm choking the Barton core down. The cheapy DDR2700 stick was =bad=, so it's only running at about 70% of its ability at the moment). With only Vue materials, even the K7-700 is a help. If I do Poser imports with high-res textures, then they all bog down because they have to hit the swap file so heavily.

If you want to play with distributed rendering, here are a few things to consider:

(1) One of the handiest toys I've gotten is a KVM switch, which lets one keyboard, monitor, and mouse work with, in this case, up to 4 separate computers. Do a hunt on Pricewatch and you can find a fully electronic one for around $40.

(2) Use a separate network switch for the renderfarm. Keeping all the dedicated-function boxes on the same switch simplifies your wiring.

(3) Desktop cases. You can get them for about $40 (no power supply), and there is plenty of room.

(4) Less is better. The fewer things you have to plug in, the better off you are. Get the el-cheapo video cards; all you need is enough to render the desktop - an accelerator just uses power you are paying for.

(5) Consider color-coding everything. Each renderbox I have has an LED-lighted cooling fan added, and the Cat5 cables for that box match the fan color. That way I avoid having to tag and bundle cables to keep things traceable.

Another thing to keep in mind is that you need to decide which unit will be the controlling unit. Vue becomes the manager for the RenderCows, and you are flirting with almost certain termination if you try to do anything else with that unit. You might want to consider leaving your Vue install on your current system, building a new box, and simply putting a 'cow on it. That way you -should- be able to do other things while a render is in progress.


gaz170170 ( ) posted Mon, 07 July 2003 at 12:53 PM

Wow, thanx for the info guys...very detailed and helpful

To forester and draklava: if what you are saying is correct, and multithreading is an operating system feature, then why is Intel pitching these new chips as multi-threading? May I draw your attention to this from the Intel website:

"Intel extends Hyper-Threading Technology to a variety of desktop PCs, with the new Intel Pentium 4 processor, featuring an advanced 800 MHz system bus and speeds ranging from 2.40C to 3.20 GHz. Hyper-Threading Technology from Intel enables the processor to execute two threads (parts of a software program) in parallel - so your software can run more efficiently and you can multitask more effectively."

The page in question is here: http://www.intel.com/products/desktop/processors/pentium4/index.htm?iid=ipp_desk_proc+highlight_p4_ht&

So is it a function of the operating system, just facilitated further by this recent hardware development?

Please excuse my ignorance. I have been an AMD customer for a while and never really considered an Intel until now.

As for a renderfarm type scenario, how do I connect the two systems? Through a network setup?

Thanx again


draklava ( ) posted Mon, 07 July 2003 at 1:16 PM

I had not seen Hyper-Threading before - sounds like Intel marketing spin :)

Hyper-Threading as they describe it is still different from multi-threading. Multi-threading, as we talked about, is a function of the OS, but more specifically, the programmer has to write their program to use multiple threads. Most Windows software - games, art programs, etc. - is written to be multi-threaded, but it is still up to the programmer to implement it...
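To make that concrete, here is a minimal hypothetical sketch (not from any real renderer) of the difference between a render loop the OS can only ever schedule on one CPU and one the programmer has explicitly parallelized:

    import time
    from concurrent.futures import ProcessPoolExecutor

    def render(frame):
        # Stand-in for the real per-frame work.
        return sum(i * i for i in range(10**6)) + frame

    def render_all(frames):
        # However many CPUs (or HT logical processors) exist, this loop
        # is a single thread, so the OS can only run it on one of them.
        return [render(f) for f in frames]

    def render_all_parallel(frames, workers=2):
        # The programmer has to split the work explicitly; only then does
        # a second CPU (real or Hyper-Threaded) have anything to do.
        with ProcessPoolExecutor(max_workers=workers) as pool:
            return list(pool.map(render, frames))

    if __name__ == "__main__":
        t0 = time.time(); render_all(range(8)); t1 = time.time()
        render_all_parallel(range(8)); t2 = time.time()
        print(f"serial {t1 - t0:.2f}s, parallel {t2 - t1:.2f}s")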

From what the Intel page says, it looks like they have figured out a way to get the chip to work on two threads at once, which could be pretty cool if it works like they say...

Right now the bottleneck with a processor, even a nice new P4 3.0 GHz, is that it can only do one thing at a time, no matter how fast it does it, so this would help get around that barrier.

I would check out some sites like
www.tomshardware.com
www.sharkeyextreme.com

They are probably all over this new technology, benchmarking it, etc., to see if it is really worthy.

As far as the renderfarm goes, Dale B had a thread a while ago where we discussed the easiest way to make a home network.

My vote is a $50 Linksys 4-port router - it lets all your computers connect to each other and to the internet, and it is very easy to set up.

There are other brands too, like NetGear, that do the same thing and have similar features. I can dig up a link, but you should be able to search buy.com or some site like that for a 4-port cable modem/DSL router.


Dale B ( ) posted Mon, 07 July 2003 at 1:24 PM

gaz170170: Doing a search on some of the hardware boards, such as Tom's Hardware Guide, Anandtech, and (for flavor) The Inquirer, will probably give you an idea of just how much of a FUBAR things are in the hardware world... :P

Intel's HT is very much an exercise in 'marchitecture', or marketing buzzwords. It boils down to a trick that forces a single CPU to run more process threads than it was originally designed for, thus -simulating- a second processor. You have to be running an MP-aware OS - not because this magically makes a second CPU appear, but because that type of OS can handle the increased thread load. You also have to have applications coded specifically to use Hyper-Threading before you get more than a few percent (less than 5%) increase in performance. In a lot of ways it's like SSE2: that looks really cool when you can run it, but the only apps out there that really use it are a couple of benchmarking programs. 99.999% of the software out there doesn't know what SSE2 is and so ignores it (and that is actually one of the pipelines HT takes advantage of; if an application actually used that pipe, HT performance would degrade significantly). And what Intel doesn't mention is that no matter what you trick the processor into, it still only has one L1 cache and one L2 cache. If you trick the chip into running two threads at the same time, each will need dedicated space in the caches to hold its instructions and working data. So at best you have to divide your cache sizes in half, and cache size is one of the bigger variables affecting how a chip performs. You also have to have a motherboard whose BIOS is enabled for HT before it will activate (HT has actually been in the chips for some time, but Intel didn't say so, or release the BIOS updates, until it needed something flashy).

I would advise sticking with AMD for two good reasons: cost and compatibility. The P4 has something like 5 different socket formats on the market, and none of them are cross-compatible. Odds are very good that you would get a P4 board with little or no ability to take a faster chip later. AMD's Socket A is used by the whole range of Athlon/Duron chips, and the motherboards are mostly backwards-compatible through the whole spectrum (note I said mostly...).

As for the renderfarm, it's simple. Connect the network cables to the same switch, make sure you -do not- have firewall software active, start a render, activate HyperVue rendering, and Vue will search the network for any RenderCows. I use Win2k Pro and there is no configuring of the OS needed - none in XP, either. The worst you have to do is know the network names of the computers, in case you have to add them by name. It's the easiest network setup I've ever seen...
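As an aside on how that kind of automatic discovery works in general: Vue's actual protocol is proprietary and not documented here, but the usual trick is a UDP broadcast on the local subnet with nodes answering back. A generic sketch (port number and message format entirely made up):

    import socket

    PORT = 50505  # illustrative port, not Vue's

    def announce_listener():
        """Run on each render node: reply to 'WHO?' broadcasts with our hostname."""
        s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        s.bind(("", PORT))
        while True:
            data, addr = s.recvfrom(64)
            if data == b"WHO?":
                s.sendto(socket.gethostname().encode(), addr)

    def find_nodes(timeout=2.0):
        """Run on the controller: broadcast a query and collect replies."""
        s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        s.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
        s.settimeout(timeout)
        s.sendto(b"WHO?", ("255.255.255.255", PORT))
        nodes = []
        try:
            while True:
                data, addr = s.recvfrom(64)
                nodes.append((data.decode(), addr[0]))
        except socket.timeout:
            return nodes

This is also why a firewall breaks discovery: it silently eats either the broadcast or the replies.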


forester ( ) posted Mon, 07 July 2003 at 1:51 PM

Boy you guys are good! Exactly right!



Dale B ( ) posted Mon, 07 July 2003 at 3:08 PM

:D Never go into the CPU Holy Wars with less than full armor and the sharpest sword you can carry! BTW forester, I -really- like the water objects you have in the MP. I'm playing around with animating the rain objects in Vue - interesting effects... I'll get something up once I'm happy with the results. Oh, and an addendum to the above: the most recent versions of Max, Maya, and Lightwave do implement SSE2, as does anything coded natively for the Itanium server chip. I'll wait for the Hammer chips and x86-64 code, personally. -That- is impressive: a chip that runs either current code or 64-bit code in native mode, no emulator needed. And considering that software houses are reporting that converting existing programs to x86-64 and recompiling takes one programmer 1-2 days... we might just see a 64-bit version of Vue and the 'cow when AMD gets the Athlon 64 out this fall...


forester ( ) posted Mon, 07 July 2003 at 3:45 PM

Thanks for the note Dale. I'd like to see the animation when you have something you like.



gaz170170 ( ) posted Mon, 07 July 2003 at 6:17 PM

Jeez, what have I started? Lol Thanx for the info guys, I will check out some of the above links for more info, but I suspect you have already told me all I need to know. PS...still going for the Intel :-)


Dale B ( ) posted Mon, 07 July 2003 at 8:46 PM

Oh hell, this is nothing. Pete didn't get into it for one, and there are 3 or 4 more technogeeks that sat this one out (or are on vacation... :P) As for your chip choice..... Call it a Learning Experience.... >:)


Thalaxis ( ) posted Mon, 07 July 2003 at 8:53 PM

HyperThreading is most definitely not marchitecture. It's Intel's (hokey) name for Simultaneous MultiThreading. It is a hardware implementation of multithreading; basically, it allows the processor to keep more of its execution units busy when running multithreaded software. In Cinema 4D it almost invariably results in a 20% performance increase. If you want information about technology, avoid THG. If you want the fastest gun in the west, get a Pentium 4. Number two is an Opteron. The best value is still an Athlon, though - especially if you stuff it in one of those little Shuttle boxes, which make building your own machine almost trivially easy. :)
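One bit of arithmetic worth spelling out, since "N% faster" is easy to misread: a throughput gain divides render time rather than subtracting N% of it. A back-of-envelope check, with made-up render times:

    def new_time(old_minutes, percent_gain):
        # A throughput gain divides wall-clock time; it does not subtract N%.
        return old_minutes / (1 + percent_gain / 100.0)

    print(new_time(60, 20))  # the ~20% Cinema 4D figure: 60 min -> 50 min (not 48)
    print(new_time(60, 30))  # Intel's "up to 30%" claim:  60 min -> ~46 min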


PAGZone ( ) posted Tue, 08 July 2003 at 1:46 AM

How about a dual 2.0 GHz G5 Mac with dual 1 GHz buses and DDR400 RAM? I can't wait to see how these systems fare with Vue... they will be available next month. From the demos I saw, they are faster than a dual Xeon 3.0 GHz... Regards, Paul


Thalaxis ( ) posted Tue, 08 July 2003 at 7:41 AM

The demos were bogus. The G5s will be fast, of that there's no doubt, but Apple apparently doesn't think that's enough to sell them, so they went out of their way to hobble the x86 P4s so that their machine would look faster. Apple is setting a lot of false expectations with those demos; they should have compared them to G4s instead, because being less fast than something as ridiculously fast as a high-end P4 (and I'm not talking about clock rates here) doesn't have to be a bad thing... unless you go out of your way to set that expectation. A good rule of thumb: take the G5's clock speed, at 1000 MHz, and look at the performance of a P4 at that speed. That's about what you should expect on average from the G5... and that's hardly something to scoff at.


PAGZone ( ) posted Tue, 08 July 2003 at 12:20 PM

Thalaxis: I think you'd better read the facts. There was nothing bogus about these demos. The PC backers always cry foul when someone outdoes them... This is a much-debated subject, and IBM and Apple, as well as the third party that ran the benchmarks, showed that nothing was hobbled. They used the open-source GCC compiler across the board. Sorry, but you are misinformed. The coming months will prove these specs.

Adobe and other companies have already gone on record stating that the G5 is the fastest computer to run their applications. This doesn't come from Apple's benchmarks but from Adobe's and others' own testing. Does this mean the G5 will run everything faster? No - some applications are written specifically for a particular processor. But talking to a few developers who have had these systems for a while revealed that they are a leap forward in personal computing power.

"A good rule of thumb: take the G5's clock speed, at 1000 MHz, and look at the performance of a P4 at that speed. That's about what you should expect on average from the G5... and that's hardly something to scoff at." This is a statement I would expect from a PC user who doesn't have a clue about the technical aspects of the G5. Talk to IBM's engineering staff about the 970 (G5), tell them your rule of thumb, and they will laugh at you. The G5 runs on a 1 GHz bus while the fastest P4 runs on an 800 MHz bus; the bus is always the bottleneck, and IBM/AMD/Apple have eliminated that. Beyond that, the 970 architecture is true 64-bit and is a RISC chip, unlike the P4, which is CISC and 32-bit. Even though the clock speed is lower, the chip is faster. Do the research before you make statements that are false...


draklava ( ) posted Tue, 08 July 2003 at 12:39 PM

Uh oh - now we have a holy war! PAGZone is right that it is kind of like comparing apples to oranges, in that the Pentium is a CISC-based chip while the PowerPC is RISC-based, which means the same C++ code compiles down to a different instruction mix on each (I might be talking out of my bum here, but I think that is the gist of the RISC/CISC difference). That being said, clock speed is less of an issue when comparing a P4 to a G5. Bus speed is significant too, but the difference between 800 MHz and 1000 MHz is probably pretty negligible! Still, a benchmark is a benchmark: if you render the same complex Vue scene on a Mac and on a Windows box and get out the stopwatch, one will likely be faster than the other. I guess it's all a matter of personal preference - the rest, when it gets down to it, is just a big pissing contest! This is coming from a former hardcore Mac evangelist who has switched to Windows, although OS X on a G5 is probably a tasty combo :)


Thalaxis ( ) posted Tue, 08 July 2003 at 12:50 PM

How typical... even IBM's own performance estimates bear out the rule of thumb I provided -- and their estimates show the PPC970 in a better light than Apple's. It says a lot about anyone who believes such an obvious piece of deception. They didn't even TRY to hide the fact that they were crippling the competition in order to look good, instead they made excuses for it. Sad. I suspect that the value of this thread has ended.


draklava ( ) posted Tue, 08 July 2003 at 12:53 PM

Don't lump me into that category! I have not seen any of the links that you guys are slinging from Apple, IBM or Intel... Where is mightyPete when we need him? We need to get back to the Windows 98SE is better than Windows XP thread :)


Thalaxis ( ) posted Tue, 08 July 2003 at 1:11 PM

"Don't lump me into that category!" I wasn't. I was referring to anyone so blinded by religious fervor that they actually believe that there was some merit in Apple's comparisons. If you want to see how bogus they are, start with spec.org.


gaz170170 ( ) posted Tue, 08 July 2003 at 2:33 PM

Sorry to interrupt guys, but what's a Mac? Lol. I didn't realise I was about to open such a can of worms. Another question, if I may... I wasn't looking to upgrade my graphics card, having spent a hefty sum a few months ago on a GeForce4 Ti 4400. But I have been advised that, as this card is AGP 4x and not AGP 8x, which the motherboard I am looking at supports, it will hold the system back a bit. Is this true? I may be tempted to get a shiny new ATI Radeon 9700 Pro, but I need to justify it to my bank manager (my girlfriend... lol)


draklava ( ) posted Tue, 08 July 2003 at 2:38 PM

Again, I may be speaking out of my bum, but I think the GeForce4 will be quite sufficient, especially if you are rendering to disk. A better video card will make creating the scene quicker (if you have OpenGL turned on) but will not affect render speed when rendering to disk. Of course, the 9700 Pro is pretty sweet, and you may have to make an excuse to get it, or wait a month or two. Isn't there a 9800 out now? I've been using a laptop lately, so I'm out of touch with the latest and greatest desktop upgrades.


Thalaxis ( ) posted Tue, 08 July 2003 at 2:40 PM

AGP 4x vs. AGP 8x isn't a big deal. The big deal is that the 9700 Pro is a LOT faster than the fastest GF4 Ti cards. Whether or not it is worth the money I don't know; I'll have to get back to you on that when I try out my 9800 Pro (yes, it's faster than the 9700 Pro, but not by all that much, so it should be a good indication, I think). That won't happen until I buy my XPC or iDEQ and my processor (one or two weeks).


Dale B ( ) posted Tue, 08 July 2003 at 2:46 PM

Well, once I have some PC2700 DDR for the third box in the garden, I plan to do a 98lite install, run a benchmark Vue scene that will eat cycles and bang on the ol' memory management, then reformat and do the same with Win2k, just to see if the good version of 98SE can keep up with the older boy. I want to make sure I have good, stable hardware underneath before I test, since they have to be tested on the same setup for the results to mean anything.

BTW, totally off topic (sorta), and for those who use 2k and XP and dream of cutting the fat out of them: there's a very limited beta of XPlite out now for owners of 98lite to give a spin. And it is =sweeeeeeeeeeeeeeet=. It runs as a native Windows app, and the first thing it does is give you a toggle to switch off the Windows File Protection scheme. After a reboot, you can remove some of the fluff from 2k and XP (they limited it to things like Media Player, fonts, wallpaper, etc., so there is no chance of damaging the system). And once you're done, you can switch the file protection back on. So far the beta has worked on XP Pro and Win2k Pro with nary a bobble. Shane Brooks has even said he's considering coding an NT desktop-replacement DLL along the lines of the Win95 desktop... as in, no active component. Sigh. Now I just have to copy the IE 6 key at the top of the registry stack and keep it for when I get rid of that bugfest abortion of a so-called web browser, to fool all these proggies that swear they -have- to have IE to even install (they lie, btw... except for MS Office, latest editions). It's amazing just how stable Windows can get when you scrape the 'ease of use' crapola off of the actual OS.

And HT is admittedly not marchitecture, in that it does exist. However, it is marchitecture whenever they lay the schmooze out such that you have to be technically literate to understand that it is -not- some magic replacement for a genuine dual-CPU setup. Using unused execution pipes is a good idea, but if a coder writes his stuff to -use- those pipes, then HT is a fizzle. Well, except maybe at the prefetch and branch-predictor stages.


Thalaxis ( ) posted Tue, 08 July 2003 at 2:52 PM

"Using unused execution pipes is a good idea, but if a coder writes his stuff to -use- those pipes, then HT is a fizzle." Actually, if a coder writes his stuff to USE those pipes, the potential gains are considerably better... assuming that you're talking about an intelligent coder. The fact that in most rendering applications, HT produces a non-trivial improvement means that it adds value to the P4 owner. So however much you criticize Intel's marketing deparmtment, there is nothing wrong with HT; in fact even Intel's own hype about it indicates that one should expect up to a 30% increase in performance with it... which has been borne out in the real world.


Dale B ( ) posted Tue, 08 July 2003 at 2:53 PM

As far as the cards go, nothing has even started stressing AGP 4x bandwidth yet. I've thrown all the Unreal games, Enter the Matrix, and the newest Tomb Raider at my GF4 Ti 4400, and it has handled them all at full effects and 40+ fps. I wouldn't worry about the card, and the first engine that -may- actually strain the GF4s (Doom 3) hasn't made it out the door yet. You =might= want to check into the 9700's OpenGL support, though. Nvidia hasn't gotten the implementation right yet... or, more accurately, they skew it for gameplay, not standards compliance. A good, compliant OpenGL card will help with Vue for certain.


PAGZone ( ) posted Tue, 08 July 2003 at 11:42 PM

Attached Link: http://www.veritest.com/clients/reports/apple/default.asp

Religious fervor? I don't think so. I own several P4 PCs and a Mac. I am merely stating that to say they crippled the tests is not accurate at all. You are the one being blinded and closed-minded. Like I said in my earlier message, you are challenging the tests and specs, so the burden of proof is on you: prove that anything was intentionally crippled. Read the report for yourself - it was done by a third party. Besides, most of the naysayers who want to believe it was rigged are all looking at the SPEC test, which is not the real world of computing. That is why they also went the extra mile and compared the systems using real-world applications - applications like Photoshop, Mathematica, Logic Audio, and Luxology's - and those companies' own developers and product specialists did the testing themselves. Anyway, I was not trying to start a PC vs. Mac war; I like and use them both, so I couldn't care less what people think. It just gets annoying that whenever you suggest a Mac to someone, some PC-dedicated, Mac-bashing Intel zealot thinks they know everything about a machine they know nothing about. Regards, Paul


Thalaxis ( ) posted Wed, 09 July 2003 at 8:55 AM

"Like I said in my earlier message, you are challenging the tests and specs, so the burden of proof is on you. Prove that it is intentionally crippled. Read the report for yourself that was done by a third party. " No, the burden of proof is not on me... the proof is far too obvious to be worth wasting further time with a mindless fanatic. In any case, the blindingly obvious decpetion wasn't even necessary to sell the computers to any but the least intelligent, so unless the goal was to insult the intelligence of the target market, I don't see why they bothered.


PAGZone ( ) posted Wed, 09 July 2003 at 11:06 AM

Attached Link: http://members.cox.net/craig.hunter/g5/

Oh brother. I am not a mindless fanatic, and I don't think Apple is king. The current G4s can't hold a candle to my 3 GHz P4. The G5, however, is another story - one that even a NASA engineer considers worthy. And it seems that you are nothing but a message-board troll who loves to insult people and ignorantly make accusations about things you obviously know nothing about. Believe what you ignorantly want to believe... I've got better things to do than argue with an obvious Mac hater. Vue On! Paul


Thalaxis ( ) posted Wed, 09 July 2003 at 11:25 AM

You don't read very well, do you? I never even tried to imply that the G5 wasn't worthy. Quite the contrary - I flat out stated that I think it will be. The fact that I know better than to believe the marketing lies doesn't make me a Mac hater. Besides, if you actually READ what NASA had to say about it, you'd see that it bears out what I've been saying all along.

