Forum Coordinators: RedPhantom
Poser - OFFICIAL F.A.Q (Last Updated: 2024 Nov 27 5:12 pm)
What do you mean by "memory"? System memory, or video memory? For video memory I know fairly certainly that, if the color depth and resolution are the same, then the video memory used will be the same. For system memory I think it'll actually be a little bigger for the procedurally-generated bitmap, because a) you still need somewhere to stick that array of pixels and color depth before it is moved into video memory; and b) you need space to do the math to generate the bitmap in the first place. As for hard disk space, procedurally generated textures are obviously going to be smaller when dealing with large textures.
Most nodes don't increase memory use. An ImageMap node does, because it holds the entire image. Any math you do on the image does not generate new images. Instead, each point is calculated on the fly and then discarded when no longer needed. So a purely procedural texture (using no ImageMap nodes) is always going to take up less system memory.
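To make the on-the-fly idea concrete, here's a minimal sketch in Python (not Poser's actual code; the checker function is a made-up stand-in for a procedural node). Each sample is computed from (u, v) when the renderer asks for it and can be discarded immediately:

```python
def checker_shader(u, v, scale=8.0):
    """Toy procedural node: one RGB sample computed from (u, v) on demand."""
    c = (int(u * scale) + int(v * scale)) % 2
    return (c * 255, c * 255, c * 255)

# The renderer queries the shader point by point; no bitmap is ever built.
# Memory cost is the function plus a few locals, regardless of render size.
row = [checker_shader(x / 8, 0.0) for x in range(8)]
```

An ImageMap node, by contrast, has to hold every pixel of its image in memory for the whole render, whatever you do with it downstream.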
Video memory is irrelevant except for the preview in P7. The renderer is purely software - you don't even need a 3D graphics card to do software rendering.
Also, I think I read that P7 caches images better, breaking them up into pieces, so the total memory will not always go up with the total image bytes you're dealing with in ImageMaps.
And the "simplified" view of a material is just that - a simplified view. There are still nodes behind it, and an image is an ImageMap.
Renderosity forum reply notifications are wonky. If I read a follow-up in a thread, but I don't myself reply, then notifications no longer happen AT ALL on that thread. So if I seem to be ignoring a question, that's why. (Updated September 23, 2019)
Quote - Most nodes don't increase memory use. An ImageMap node does, because it holds the entire image. Any math you do on the image does not generate new images. Instead, each point is calculated on the fly and then discarded when no longer needed. So a purely procedural texture (using no ImageMap nodes) is always going to take up less system memory.
Are you certain? How did you come to know that? I don't think that's very likely. As you reposition a camera, the graphics engine would have to regenerate the entire bitmap from scratch for every increment of the camera, or, I expect, for any change in lighting. From a programmer's perspective I certainly wouldn't do it that way; I think it'd be stupid. Granted, the preview bitmap can be much smaller than the rendered bitmap, but that's still a ton of wasted cycles that you need for other things, like moving the polygons themselves around.
Quote - Video memory is irrelevant except for the preview in P7. The renderer is purely software - you don't even need a 3D graphics card to do software rendering.
True enough, but it may matter a great deal when the scene has many bitmaps to preview and the graphics card has a small amount of memory (e.g. some dinky old card with 32 MB). If there's enough texture data (and depending on how the card works, this may be shared with 3D data) then the card may be unable to display the preview. I think it's kind of a moot point in respect to this question, since I'm pretty sure both situations would use equal video memory.
I was referring to system memory, not video memory. And I'm using P6, not P7.
Available on Amazon for the Kindle E-Reader Monster of the North and The Shimmering Mage
Today I break my own personal record for the number of days for being alive.
Check out my store here or my free stuff here
I use Poser 13 and win 10
Quote - I don't think that's very likely
What did you mean by "that"? I'm not sure which idea you're objecting to.
As the renderer works on each pixel, it determines which objects must be queried for a color, and exactly what the position is in UV space, in 3D space, the Normal of the surface, etc. It sets all these variables up, U, V, P(x,y,z), N(x,y,z), etc. It then calls the shader. The shader is then interpreted with those parameters and returns a color. This is performed over and over again. As each color is returned from the shader, the renderer stores that in an accumulation buffer. The size of this buffer grows in proportion to the size of the render and has nothing to do with the amount of data needed to perform the shading calculations.
The data needed to perform the calculation for each node produces one number or one color. Once that node feeds its info to the other nodes, it is no longer needed. I suppose there is space set aside for each intermediate result, but even with 100 nodes, that's going to be puny.
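The loop described above can be sketched like this. This is a hypothetical simplification, not Firefly's actual code, with a trivial Lambert term standing in for a full node network:

```python
def shade(u, v, n, light_dir):
    """Stand-in shader: takes the per-sample variables the renderer set up
    (UV, normal, light direction) and returns one value."""
    return max(0.0, n[0] * light_dir[0] + n[1] * light_dir[1] + n[2] * light_dir[2])

def render(width, height):
    buf = []                      # accumulation buffer: grows with render size
    n = (0.0, 0.0, 1.0)           # assumed: flat surface facing the camera
    light = (0.0, 0.0, 1.0)
    for y in range(height):
        for x in range(width):
            # the shader's intermediate values live only for this one call
            buf.append(shade(x / width, y / height, n, light))
    return buf

img = render(4, 4)
```

Note that the only structure whose size depends on anything is `buf`, and it depends on the render dimensions - not on how complicated `shade` is.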
I'll grant you that each node in the shader that is currently being edited is occupying memory for its little preview, but those are discarded as soon as you select a different prop and you're not in the material room. I verified that: with a shader of 131 nodes in memory, they increased Poser's memory use by about 4 megs. This makes sense since each is 100 by 100 by 3 bytes for RGB, times 131 = 3,930,000 bytes. When I selected another object, the memory was released. This is not rendering memory - it is only needed while you're editing in the material room, for the previews under each node.
Quote - As you reposition a camera, the graphics engine would have to regenerate the entire bitmap
What bitmap are you talking about? The only bitmaps in memory during rendering are those referenced by ImageMap nodes and the render itself. The different perspective views of the various textures are generated point by point, not in response to camera position changes. There are no images that are prepared in advance because of the camera position.
Nor are there any pre-calculated bitmaps because of lighting. That is the whole point of a shader - it receives all the variables of light strengths, light positions, light directions, object position, surface normal direction, camera position and so on. Given all these numbers, it computes a color. Then it gets called for the next pixel, and the next, and the next. Each value returned from the shader is stored in the accumulated render.
The answer to how I come to know that is that:
I loaded a one-sided square into a scene. I scaled it to just fill my preview. My render was 653 by 653 pixels. I rendered the square with my cloth shader that has 131 nodes in it. Memory at idle was just over 87 megs; as the render proceeded it rose to 117 megs, an increase of 30 megs during the render.
I then saved the render and replaced the shader with an ImageMap containing the rendered cloth. After repeatedly selecting the ground and clicking "reload textures" I got it to unload the memory and return to the original 87 megs. I then rendered - memory rose to 119 megs, an increase of 2 megs over the procedural version. The raw bytes of an image that size in memory is 1.7 megs, plus you could expect a bit more for bookkeeping and image caching for interpolation. Thus the rendered image was identical, but there was an extra 2 megs of information in memory during the render.
I repeated this 10 times for both scenarios and got the same measurements every time.
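The 1.7 meg figure is consistent with the render being held at 4 bytes per pixel (RGB plus an alpha channel - an assumption on my part, not something Poser documents):

```python
w = h = 653
rgb_bytes  = w * h * 3   # 3 bytes per pixel, RGB only
rgba_bytes = w * h * 4   # 4 bytes per pixel, RGBA
print(rgb_bytes, rgba_bytes)   # 1279227 1705636 - the latter is ~1.7 megs
```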
Is that convincing?
Trust bagginsbill on this one! ;)
I'll back him up:
Procedural shaders compute each sample on the fly, so there is no need to store anything beyond the working variables.
Ask yourself one question: Why is it that you can have thousands of shader nodes (if not using image maps) and Poser doesn't choke with the "Out of Memory" error, yet add a few strategically sized image maps and bye-bye Poser?
C makes it easy to shoot yourself in the foot. C++ makes it harder, but when you do, you blow your whole leg off. -- Bjarne Stroustrup
Contact Me | Kuroyume's DevelopmentZone
For your amusement, here's a procedural brute; 283 nodes to make a Glenn plaid that is not exactly the same anywhere! I love giant procedural shaders. No images, no memory problems, and no annoying visible repetition from a tiled texture. You can load a hundred of these into Poser.
You can also tell, from the zoomed-in views, that there's no way you could get that kind of detail from an image. It would have to be over 20,000 pixels wide to render that cleanly up close.
I'd love to get that Pixels3D, er, Poser shader node system into Cinema 4D! ;) My experience writing shader/material/texturing code for 3D is extremely limited. The skeleton of one was started - it has the basic node interface fleshed out, but none of the UVW mapping and shader maths are in place.
Wow, they had graphics in computers back in 1984? [duck and cover] ;D
They? No, but I did. Way before the IBM PC got the EGA card (16-color), I had a Sanyo "PC compatible" computer with a 16-color 640 by 480 display. It was awesome. I got deep inside the BIOS and was writing rendering stuff in assembly language. I had great fun and many sleepless nights. One of my renders ended up on the front of PC Week or whatever it was called then.
Did it ever occur to you to use Poser to generate texture images, which you can then import into other apps that don't have cool shader implementations? I can make a color map and bump map for any of my cloths into a seamless tile which you could use in other apps. I just tell the shader to not do any of those variation things that would make the tile non-repeating. This gives me the power to generate an infinite number of cloth weaves, colors, etc. while having the power of a much better rendering platform like Modo.
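The trick to baking a seamless tile is to build the pattern only from functions that are periodic in UV. A sketch with a toy weave function of my own invention (not one of the actual cloth shaders): because every term uses sin/cos of 2*pi*u and 2*pi*v, the left edge matches the right and the top matches the bottom:

```python
import math

def weave(u, v):
    """Hypothetical periodic cloth-like pattern; returns one grey value 0-255."""
    s = 0.5 + 0.25 * math.sin(2 * math.pi * u * 8) \
            + 0.25 * math.cos(2 * math.pi * v * 8)
    return max(0, min(255, int(round(s * 255))))

def bake(size=64):
    # Sample the procedure on a regular UV grid; the result is an ordinary
    # image tile that any other app can load as a texture map.
    return [[weave(x / size, y / size) for x in range(size)] for y in range(size)]

tile = bake(64)
```

Turning off the non-periodic variation terms is exactly what makes the baked tile repeat cleanly; with them on, you get the "never the same anywhere" look but lose tileability.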
The way that my plugin works, the materials are loaded 'in situ' - The Poser file is parsed and everything (as much as I have been able to) is recreated - morphs, master/slaving, rigging, parenting, conforming, and texturing. The plugin works directly on the Poser content and scenes, referencing the available Runtimes. Poser itself is not part of the process. So the shader nodes would either need to be baked on the way in or emulated - the extra step in emulation would not be far from the baking process (except for the GUI) and far more useful. :)
I'm talking about the preview engine - you don't move the camera in rendering :)
This is a digression from the OP's question but - as preview quality improves, memory usage is going to go up.
Quote - I'll grant you that each node in the shader that is currently being edited is occupying memory for its little preview, but those are discarded as soon as you selected a different prop and your not in the material room. I verified that with a shader of 131 nodes in memory, they increased Poser's memory use by about 4 megs. This makes sense since each is 100 by 100 by 3 bytes for RGB times 131 = 3,930,000 bytes. When I selected another object, the memory was released.
A) How accurate is the preview for that shader setup? If it shows a reasonably accurate preview instead of a flat white, then that image data has to be stored somewhere in memory. Well, it doesn't have to, but recalculating it from scratch is wasteful as long as there is sufficient available memory for the whole preview.
B) What happens when you move the camera in the preview for that shader setup?
I understand what you're saying, that during rendering, it makes no sense to cache image data beyond what you need for the current pass. And what I'm talking about doesn't apply too much to Poser 6 since it has a very primitive preview engine, and really not terribly much to Poser 7 either (not much better).
Poser has a render engine? :lol:
It's prolly based solely on programming knowledge of the 80's of last century.
Well, if you look with a debugger at what Poser is doing, you may notice that it really does recalculate from scratch when changes occur in the material room.
If you move the camera in preview, not the material room, then it doesn't recalculate everything that has procedural shaders applied to it. Most of them are not even visible in preview mode.
It only recalculates things that are displayed in preview, e.g. no raytraced reflection.
I have not yet noticed it recalculating, for example, a tile pattern applied to some surface that is visible in preview while I move the camera. Ergo I assume it stores such stuff in memory and only does another calculation for the render.
Somewhat disappointing is the performance of P7 in material room. With many shaders consisting of 100+ nodes and no or small image maps I experience noticeable delays now that I had not with P6 on a much weaker system in 2005.
To answer the memory question:
Most images require more memory than procedural shaders. A 1000 x 1000 texture takes as much memory as quite a few shader nodes - I would say around 25 to 30.
A) Probably not very accurate at all - as noted, some shader results don't get passed to the preview. Since the preview is a product of the graphics layer (SreeD, OpenGL, DirectX, software, etc.), the preview display will be a factor of the support in any of these. For instance, most graphics card OGL drivers can only simulate lighting from a limited number of lights (4 or 8 mostly). If you have 10 lights and are using OpenGL, you'll only see lighting from as many as can be handled in the preview.
Remember that the GPU on graphics cards is engineered to be extremely fast at doing one thing - rendering 3D graphics (and 2D, but that's easy) - and has vertex, triangle, and texel buffers. Rendering is, of course, done on the CPU using system memory.
So, basically, if the preview were to attempt to 'render' the shaders as in a real render, it would use the same process - procedural UVW mapping pixel by pixel.
C makes it easy to shoot yourself in the
foot. C++ makes it harder, but when you do, you blow your whole leg
off.
-- Bjarne
Stroustrup
Contact Me | Kuroyume's DevelopmentZone
Quote - If it shows a reasonably accurate preview instead of a flat white, then that image data has to be stored somewhere in memory
I think you're operating under a few incorrect assumptions.
First of all, an in-memory image is always the same size regardless of what color the pixels are. An RGB image with 10,000 pixels occupies 30,000 bytes. If it has an alpha channel, it occupies 40,000 bytes. I agree that GIF or JPEG FILES on disk can be smaller when all white instead of many different colors, but that has to do with compression, and compression is not involved in BITMAPs, which is how images are represented in memory.
Second, what you see on your screen is stored in video memory; in particular, your hardware-accelerated preview is calculated and stored in the frame buffer memory of the video card, and regardless of what may have temporarily been consulted to generate the contents of the screen, the frame buffer is always the same size.
Third, under P7 the OpenGL interface can be used (if your card supports it; mine doesn't) to generate per-pixel shading just like, or very close to, what the Firefly renderer does. P7 translates the nodes into shader language for the 3D engine in the video card, and it interprets them point by point. There is no image in hardware procedural shaders, just like there is no image in software procedural shaders. If your shader uses an image, then that image is loaded into the video card's texture memory and takes up additional space. Usually the texture is resized to be smaller before loading it onto the card's memory. But if it isn't resized, then it will occupy the same amount of video texture memory that it occupies in system memory. So when dealing with hardware-accelerated image generation (into the frame buffer), the amount of shader data stored in the card is pretty similar to what is needed in system memory for a software-only render.
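The first point - in-memory bitmaps are the same size no matter what they contain, while compressed files are not - is easy to demonstrate. Here zlib stands in for GIF/JPEG-style file compression (an illustration, not how any image format actually encodes pixels):

```python
import random
import zlib

pixels = 10_000
flat_white = bytes([255, 255, 255]) * pixels          # 30,000 bytes in memory
random.seed(0)                                        # deterministic "noise"
noisy = bytes(random.randrange(256) for _ in range(pixels * 3))  # also 30,000

# Both bitmaps occupy exactly the same amount of memory...
assert len(flat_white) == len(noisy) == 30_000

# ...but a compressed *file* of the flat image is tiny, because compression
# exploits redundancy; the noisy one barely shrinks at all.
white_file = zlib.compress(flat_white)
noisy_file = zlib.compress(noisy)
```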
Quote - Somewhat disappointing is the performance of P7 in material room. With many shaders consisting of 100+ nodes and no or small image maps I experience noticeable delays now that I had not with P6 on a much weaker system in 2005.
I feel your pain. I generate my shaders outside of Poser and I have to go into the material room to load them. Even if there are not hundreds of them, there are terrible delays involved when you have merely a dozen nodes if many of them are noise-based. Poser does not cache the mat-room preview information, so first I have to wait for it to generate previews for whatever nodes are currently there, then I get to load my new shader, and then I have to wait again while it generates all the little previews again. This happens even if all the previews are collapsed and I'm never going to see them. I wish Poser had a display option to disable mat-room previews - then it would be lightning fast.
This is why I prefer to use mat-pose files. I can generate and load those lickety split because I don't have to go into the mat-room to load them. Unfortunately, mat-pose files don't work on free-standing props - only figures, or props parented to a figure.
Quote - First of all, an in-memory image always is the same size regardless of what color the pixels are.
Yes, I'm well aware of that, and that's been my basic point the whole time. We're really on different wavelengths here.
You're confusing me - you used this phrase: "accurate preview instead of a flat white", indicating you think that an accurate preview has to be stored somewhere, whereas a flat white image does not, or can be stored in a different amount of memory. If not that, then what were you getting at regarding the contents of the preview?
I think maybe you need to restate your basic point since I don't seem to be responding to it.
Here's what I assumed your basic points were:
Quote - For video memory I know fairly certainly that, if the color depth and resolution are the same, then the video memory used will be the same.
If you're talking about hardware-accelerated pose-room preview, then as I just pointed out, that's wrong. If you use images in your shader, then those images have to be copied into the hardware as well to do hardware-generated pose-room previews.
If you're talking about software-generated pose-room preview, then that's wrong because you qualified it with "if the color depth and resolution is the same" - video memory use is not in any way changed by software-generated pose-room preview; color depth and resolution have nothing to do with it.
If you're talking about rendering, then that's wrong, because again the hardware is not involved.
If you're talking about mat-room previews, that's wrong because those are not hardware accelerated.
Quote - For system memory I think it'll actually be a little bigger for the procedurally-generated bitmap,
And that's wrong, as I've shown you, because procedural texture algorithms are smaller than images in all cases. Even a 100 by 100 bitmap is bigger than the data needed to represent a procedural shader. This is true regardless of whether the procedure is running in your PC or on your graphics card.
Quote - I think maybe you need to restate your basic point since I don't seem to be responding to it.
Since we're really not understanding each other, and this is way off course from the OP's question, I'd just as soon not. Don't worry about it, no hard feelings here.
None here either.
Whenever I post I try to make everything I say clear, accurate, and relevant. I kept responding and elaborating because your questions and comments indicated I had failed in one of those respects.
No worries :)
Come on! I managed to stray much further off topic. Where's my award! ;)
This should seal it (this might be best done in an IM?): bagginsbill, I see that BodyStudio 'recreates' the Poser shader 'networks' (as they call them) in Cinema 4D. I can't see how this is possible except through baking - as Cinema 4D does not have a node tree system for its Material/Shader interface. I don't have BS and would be interested in knowing what the result is - a bitmap?
I'm willing to go off topic - and give you an award!
Here's what I get from googling BodyStudio:
BodyStudio translates your Poser shaders to Maya Shading Networks, and assigns them to the correct polygons on your characters. For easy transition, all of the base Poser shading parameters are translated into an equivalent Maya Shader Network, including:
Reading that, I would say it doesn't do any of the procedural nodes. What it does is translate numerical and color values on the root node, and maybe track down a few ImageMaps connected to those. They bothered to bullet-list all the root-node input channels and associated image maps, but if I were proud of this achievement I would have said that all the noise generators (fBm, Cellular, Fractal Sum) and all the math and so on were faithfully reproduced. Since those would absolutely be the hardest part, and noting the colors and image maps involves just reading the P4 part of the shader (which doesn't represent nodes at all - just colors and file names), I'd say they do not, in fact, translate the Poser node-based shader at all.
Note the phrase "translates your Poser shaders to Maya Shading Networks", whereas you said "the Poser shader 'networks' (as they call them)". In fact, that is not what they called them.
So I think they're talking about what you see in the mat room on the "Simple" tab. Not the full-blown procedural shaders that need a node network. I'm not impressed.
I accidentally pasted the blurb for Maya instead of Cinema 4D but the other marketing blurb is otherwise identical.
Oops, must have read Networks incorrectly - long live lexdysia! ;)
I see your point completely. Well, even I can do that - and do, in my plugin. So, I'll need to dust off that rudimentary shader tree system of mine and start figuring out the node algorithms (mathematics) and how then to apply that through the C4D Material for the texturing process (render). Another C4D plugin developer has achieved something similar (not for Poser in any way), but it is an in-house product and thus proprietary (no code to be offered on their C4D shader tree system).
Mathematics and code for some node types is provided in the aforementioned textbook, but is woefully incomplete (see Amazon for comments). Could you provide some good links or resources (texts, papers - IEEE and ACM are accessible here), please?
Attached Link: http://www.cs.unc.edu/~debug/238/prog3/noises.h
I haven't seen too much real math - or, even better, real implementations. But check out that link.
Which takes more memory to make a texture: an image, or a calculated texture (where you have all the nodes and the exclamation points in the simple tab)? Will it depend on the size of the image vs. the number of calculations?