Radelat opened this issue on Aug 15, 2004 · 48 posts
forester posted Tue, 17 August 2004 at 11:57 AM
You know, I'm not sure that some of the readers here are fully understanding what Dale B is saying. It's as if his comments are being ignored, or the significance of what he's saying is not sinking in. My apologies, Dale, but would you mind if I tried putting this out again?

As Dale B pointed out, if Vue (or any 3D program) is designed to be aware of and written to take advantage of two CPUs (multiple processing, or "MP" as Dale uses), it will detect hyper-threading as the existence of two CPUs. It will then execute the program code written and compiled for dual CPUs. However, here is approximately what happens when it tries to use that code.

The program divides the data to be processed (rendering line data, in this case) into two streams of instruction sets, one for each supposed CPU, and queues up the instruction sets for the two supposed CPUs. Now it starts to process those instruction sets. CPU #1, the only actual CPU in the machine, processes its first instruction set and then waits for the results from the second CPU. At each pass, the program code, if properly written (and Vue does seem to be properly written), will discover that there is no result from the second instruction set, so it will then send the second instruction set through the only actual CPU as well. Then it mates up the two results in the CPU cache and, depending on the size of that cache, is forced to pass them on out to RAM. Then it starts the process again.

The net effect of this "whoops, there's really nothing coming from the second CPU" set of processes is that the whole overall process is much slower than expected. If the CPU cache is just an ordinary size, as in most of the Intel chips, the cache will be fully occupied with mating up the results of the two instruction sets and passing that result out to RAM.
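To make the detection step above concrete, here is a minimal, purely hypothetical sketch (not e-on's actual code) of how a renderer written for dual-CPU machines ends up spawning two work streams on a single hyper-threaded chip. The key point: standard CPU-count queries report *logical* processors, so a Hyper-Threading Pentium 4 with one physical core reports two, and the renderer splits its scanlines accordingly even though both threads must take turns on the same execution units. The function names (`render_scanline`, `render`) are invented for illustration.

```python
import os
from concurrent.futures import ThreadPoolExecutor

def render_scanline(y):
    """Stand-in for the real per-scanline rendering work."""
    return sum((y * x) % 255 for x in range(640))

def render(height=480):
    # os.cpu_count() reports LOGICAL processors: on a Hyper-Threading
    # Pentium 4 this returns 2, even though only one physical core exists.
    workers = os.cpu_count() or 1

    # Divide the scanlines into one work stream per detected "CPU",
    # exactly as an MP-aware renderer would.
    streams = [range(start, height, workers) for start in range(workers)]

    # Each stream is handed to its own thread. With only one real core,
    # the threads simply interleave on the same hardware instead of
    # running truly in parallel.
    with ThreadPoolExecutor(max_workers=workers) as pool:
        results = pool.map(lambda s: [render_scanline(y) for y in s], streams)
    return [line for stream in results for line in stream]

if __name__ == "__main__":
    image = render()
    print(len(image))  # one result per scanline
```

The sketch only shows the work division; the cache-thrashing the post describes would happen below this level, in how the hardware juggles the two threads' working sets.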
This is partly why the Intel chips that have a 1 MB CPU cache are discernibly slower than the chips that have the 1.5 MB cache. And AMD, of course, has the advantage here, because they researched and were able to implement a 2 MB CPU cache at least a year and a half ago. (This is why your AMD Athlon XPs are fast and hold their own relative to the new P4s.) The net effect is that an Intel hyper-threading CPU has to work at least twice, and maybe three times, for each instruction set, whereas if hyper-threading were not detected as multiple CPUs by the program, the program would work only once per instruction set.

But for all of us with multiple CPUs, we certainly don't want e-on software rewriting the high-end 3D programs to assume and use only a single CPU. I don't agree that it is a reasonable conclusion that "the problem points back to e-on software." Especially not when you consider when the main architecture of the program was written versus when these new CPUs were developed and released. In fact, how much sense would it have made for e-on to decide to rewrite the Vue program, knowing what its programmers have known about how this "hyper-threading" really works - that it's kind of a sham?

And there is a moral to this story in here somewhere - or maybe two morals. One moral is that those of us determined to play out on the bleeding edge of the technology have a responsibility to have a basic understanding of how the technology works. Not overly technical - just basic. We can't just go around reading the benchmarks. I dearly love CPU Mag (Computer Power User) and eagerly wait for it each month, but I can't limit my knowledge to that level of superficiality. The second moral, following on the heels of the first one, is to try to have a realistic understanding of, and expectation for, how a complicated, robust piece of software like Vue is going to be able to react to the various "coping strategies" of the main companies caught up in the chip wars.
Try to think about which are the most intelligent decisions e-on software could make in these circumstances. The AMD guys made a heavy and profound investment in their chip architecture two years ago now, and from this has flowed a strong but simple and clean line of chips. Intel is a big company and cannot restructure overnight when caught up short. So they have to cope some way, and buy time through a series of incremental changes that advantage only the light-end purchasers. Let me tell you, Microsoft Office does just great on hyper-threading. What should e-on software and the other big 3D program companies do in these circumstances? And if you think this situation is difficult, I urge you to take a look at what's been shaping up between Intel and AMD for the next significant round of chip wars. Talk about weirdness!