Quote - Note: I had to turn the wibbly around. Were you rendering the backside? We already know that's a problem.
That is straight out of the modeler, I hadn't brought it into Poser at all at the point it was posted.
Whichway, I think I understand those, but I'll keep quiet and leave those to Stewer or Bagginsbill, who are in a position to know for certain.
Some of these renders are looking wayyyy better now than in the early going. IMVHO a small amount of noise is desirable in light-to-dark gradients on smooth surfaces, but those wishing to decrease it may consider further increases in samples, if possible (e.g. 100,000; 250,000). Maybe at some point the noise will be acceptable to most users, when not augmented in APS.
Quote - Whichway, I think I understand those, but I'll keep quiet and leave those to Stewer or Bagginsbill, who are in a position to know for certain.
Not me, hell no. I only know the few tidbits Stefan has thrown out.
Stefan, fess up. Why are there two cache settings?
Renderosity forum reply notifications are wonky. If I read a follow-up in a thread, but I don't myself reply, then notifications no longer happen AT ALL on that thread. So if I seem to be ignoring a question, that's why. (Updated September 23, 2019)
Quote - Some of these renders are looking wayyyy better now than in the early going.
Maybe, but these are also pushed WAY over maximum settings that are exposed to the standard controls. This is also about the simplest composition that will make practical use of GI - add reflection and/or transparency and render times go really vertical. Some serious improvement needs to be done on the renderer, this is not a settings quality problem. It shouldn't take an hour to render a 500x500 pic of a couple of planes and a sphere lit by a single light.
Is smoothing on there? Smoothing and Poser and raytraced effects - pick 2.
Renderosity forum reply notifications are wonky. If I read a follow-up in a thread, but I don't myself reply, then notifications no longer happen AT ALL on that thread. So if I seem to be ignoring a question, that's why. (Updated September 23, 2019)
That's a good point, hadn't thought of that (should have). I'll turn it off and try again. It really annoys me that raytraced shadows are evaluated on the base geometry and not smoothed geometry.
There ya go.
Not just shadows - all raytraced effects are screwed up by smoothing, and also by displacement. I gave up on publishing a good ocean water material involving procedural displacement for waves, because the reflections in any depression of a wave go into hyperspace and you just get areas with reflections missing altogether. I only demonstrate water with procedural bump because of it.
Renderosity forum reply notifications are wonky. If I read a follow-up in a thread, but I don't myself reply, then notifications no longer happen AT ALL on that thread. So if I seem to be ignoring a question, that's why. (Updated September 23, 2019)
This may be partially improved, because I did a render of that fantasy suit with reflection turned on for all parts and it actually came out unexpectedly well:
http://www.renderosity.com/mod/gallery/index.php?image_id=1929687
I have a suspicion that the glow map I added (Ambient in a line along the edges, pink color) helped also. Anyhow I'm gonna go conk out, if SR1 gets done overnight let me know.
I think I'm going to go back to the rendering model I'd worked out for myself over in a Challenge thread at DAZ. It will take some modifications, but let me start with the following self quote:
"A ray is traced from the camera until it intersects the gold. For diffuse and specular behavior, the program looks from the intersection point back to the lights, whose locations are known. From the angles and surface properties, an intensity and color of that surface patch are calculated. If there is a Reflection Node attached to the material, a more complicated process goes on. Starting from the intersection point, a ray is launched into the surrounding environment in the unique reflection direction determined by the ray from the camera and the angle of the surface at the intersection. [Angle of reflection = angle of incidence.] This new ray intersects some other surface. For that intersection point, the whole diffuse/specular calculation is done from that new position using the properties of the new material. The result from the secondary ray is used as the incoming light intensityat the first interaction point - the one for the ray from the camera. Note that if we send out rays from the secondary intersection, we can get reflections of reflections. That can get out of hand; the reflection bounce limit puts a stop to this.
As I understand it, the Gather Node does something similar for diffuse surfaces. Here, the direction of the reflected ray is not unique, so the program shoots out a random number of rays in random directions and averages the resulting intensities that come back. Doing that more than once really slows things down, so don't do that. [It doesn't, I think.]
Ok, that's made everyone's eyes glaze over. I'll go quietly back to rendering. Sorry."
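(For what it's worth, the "unique reflection direction" in that quote is nothing exotic - just the mirror rule, angle of reflection = angle of incidence. A minimal Python sketch of that one formula, not anything from FireFly:)

def reflect(d, n):
    # d = incoming ray direction, n = unit surface normal; returns the mirrored direction
    d_dot_n = sum(di * ni for di, ni in zip(d, n))
    return [di - 2.0 * d_dot_n * ni for di, ni in zip(d, n)]

print(reflect([1.0, -1.0, 0.0], [0.0, 1.0, 0.0]))   # -> [1.0, 1.0, 0.0]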
After some further thought, let me phrase it like this - start with a ray shot from the camera to hit a surface in the scene. It's going to bounce (it has to bounce at least once, or this is really boring, simple diffuse or specular) and there are two strategies for aiming the next ray. One, which we'll call "raytrace", is used if the direction the next ray heads off in is known explicitly. This would be the case either if this is the last bounce and it has to head to a light, or if the surface it's coming from is reflective or refractive, in which case the direction of the ray is determined by simple optics and the orientation of the surface. The other strategy, which we'll call "indirect illumination", is used if the next direction is not known. In that case, a bundle of rays is shot in random directions and the result of the bundle is the average of all the rays. Note that we can't actually evaluate the results until we know where the secondary rays land and feed that information backwards along each ray chain.
Now we shoot the secondary rays until they hit something (if they escape to space, the answer for that ray is black). At this point, we repeat the algorithm. If we hit something reflective or refractive, we shoot a new "raytrace" ray, whereas otherwise, we make a new bundle of "indirect illumination" rays. This repetition continues until we run out of either "raytrace" or "IDL" bounces along the current ray chain. In order to get any light to go down the ray chain, we have to land on a light as the last step. That's why the last ray in the chain has a known direction (ok, maybe one final ray for each light source) and is a "raytrace" ray. At this point, we can follow the chain back from the light and evaluate the intensity along each ray until we reach the original impact point.
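To make those two strategies concrete, here's a toy Python sketch of the bookkeeping I have in mind. The "scene" is a stub and every name is invented, so treat it as a picture of my mental model, not of FireFly:

import random

SAMPLES = 4   # rays per "indirect illumination" bundle

def hit_surface(ray):
    # Stub scene: 30% of rays hit something, and 1 in 5 hits is reflective/refractive.
    if random.random() < 0.3:
        return {'specular': random.random() < 0.2, 'albedo': 0.5}
    return None   # escaped to space

def direct_light(hit):
    return 1.0    # stub: light reaching the hit point straight from the lights

def trace(ray, raytrace_left, idl_left):
    hit = hit_surface(ray)
    if hit is None:
        return 0.0                                   # black
    if hit['specular']:                              # next direction is known: "raytrace"
        if raytrace_left == 0:
            return direct_light(hit)                 # out of bounces, head to the lights
        return direct_light(hit) + trace('reflected ray', raytrace_left - 1, idl_left)
    if idl_left == 0:                                # diffuse, but out of "IDL" bounces
        return direct_light(hit)
    bundle = [trace('random ray', raytrace_left, idl_left - 1) for _ in range(SAMPLES)]
    return direct_light(hit) + hit['albedo'] * sum(bundle) / SAMPLES

print(trace('camera ray', 2, 2))   # e.g. 2 raytrace bounces, 2 IDL bounces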
Ok, that covers the Bounce parameters and the Samples parameter. Assuming Intensity is a simple multiplier to affect the balance between bounced and direct light, that leaves "Irradiance Sample Size" and the two caches.
Clearly, the complicated tracing of rays gets expensive fast. So, rather than do it for every pixel, it is only done at sampled points. The "Irradiance Sample Size" is used to distribute the samples across the rendered image. Say it sets something like the minimum number of pixels between samples. (It's clearly not that simple, as the distribution is concentrated in regions that need finer sampling, but increasing the value does lower the density of red dots, which I'm assuming are at least related to the sample points.) These values are calculated on the first pass and stored in a cache file.
On the second pass, rays are shot from the camera to each pixel (we have to do all of them to get the picture!). If the ray hits a reflective or refractive surface, the "raytrace" mechanism is called again and a chain is constructed only of "raytrace" rays until a diffuse or specular surface is hit or the bounce number expires. For the last surface hit, the illumination determined from the first pass is interpolated from the values stored in the cache and used to evaluate the shader at the last point. This tracing has to be done again or the reflections will not be clearly defined. If the camera ray hits a diffuse or specular surface, the illumination is immediately interpolated from the cache and used in the shader at the hit point. That takes care of the "Irradiance Sample Size".
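In code form, my guess at the two passes looks something like this toy sketch (invented names, a stubbed scene, and only the diffuse path - not FireFly internals):

import math, random

def compute_irradiance(point):
    return random.random()    # stub for the expensive ray-bundle calculation above

# Pass 1: compute irradiance only at sparse sample points; the spacing stands in
# for whatever "Irradiance Sample Size" really controls. Results go into a cache.
SPACING = 10
cache = {(x, y): compute_irradiance((x, y))
         for x in range(0, 100, SPACING) for y in range(0, 100, SPACING)}

# Pass 2: every pixel gets its indirect light by interpolating nearby cached samples.
def interpolate(pixel):
    nearest = sorted(cache, key=lambda p: math.dist(p, pixel))[:4]
    return sum(cache[p] for p in nearest) / len(nearest)

print(interpolate((42, 7)))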
This leaves the two cache values. They are defined to be the number of luminosity points explicitly calculated divided by the number of points interpolated from the cache. This makes a modification of the second pass above. Instead of always interpolating from the cache, a random choice is made between explicit and cached such that the average ratio is the specified "Irradiance Cache" parameter. For the cached points, interpolation is done; for the others, the whole calculation of Pass 1 is done. The reason for having two parameters is that a reflection is far more sensitive to the eye than diffuse illumination, so the reflective component might want many more explicit computations than needed elsewhere.
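Continuing the toy sketch above (and reusing its stubs), the cache parameter would then just bias a coin flip between interpolating and recomputing - the conversion from ratio to probability is my guess too:

IRRADIANCE_CACHE = 0.5                                    # the slider value, read as explicit:interpolated
P_EXPLICIT = IRRADIANCE_CACHE / (1.0 + IRRADIANCE_CACHE)  # so the average ratio matches the slider

def shade(pixel):
    if random.random() < P_EXPLICIT:
        return compute_irradiance(pixel)   # redo the full Pass 1 calculation at this point
    return interpolate(pixel)              # cheap: reuse the Pass 1 cache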
Ok, that is my theory, which is mine and which I call My Theory, and I'm sticking with it for at least the next ten minutes or I think of something better or somebody tells me what the algorithm really is.
Do not take any of the above very seriously - it has to be wrong.
Whichway
Quote - ALMOST completely free of artifacts, there are two that I can see on the thigh but everywhere else is good.
It looks like Poser now renders skin much better than Carrara does.
www.youtube.com/user/ShawnDriscollCG
Quote - Now let's make the quality sliders maximum (actually 99%). Now instead of a few artifacts of very large size, we have tons of artifacts of much smaller size. Sort of an improvement, and the overall lighting intensity is all right, but wow, that looks like crap. 20 minutes of render time for this one, vs. 3:25 for the last one. I expect render time to go up with high quality settings, but this is pretty much unusable. I might as well go back to faking it with IBL (and until patching, I will).
In theory, the number of ray bounces comes into play when mirror and glass materials are used. But for common drywall materials, Poser 8 needs a lighting quality (or accuracy) setting to remove the artifacts. I don't have Poser 8, so I don't know if it has such a setting. I use modo for rendering Poser figures.
www.youtube.com/user/ShawnDriscollCG
I sure don't know the units. I know it alters the average distance between samples and that's all. Only Stewer knows.
Renderosity forum reply notifications are wonky. If I read a follow-up in a thread, but I don't myself reply, then notifications no longer happen AT ALL on that thread. So if I seem to be ignoring a question, that's why. (Updated September 23, 2019)
Quote - I think I'm going to go back to the rendering model I'd worked out for myself over in a Challenge thread at DAZ.
Well, as they say, "Dazzle 'em with brilliance or baffle 'em with BullSh*t."
Either way, I find "Your Theory" rather interesting; as a matter of fact, I could actually visualize what was happening to all those little "Ray Tracers" (I see little space ships flying around ;))
Heck all I really wanted to do was bookmark this thread, but I just had to comment on this post.
Thanks!
S
Quote - Short question - anyone know the units of "Irradiance Sample Size"? Thanks.
Seems likely these are Poser Native Units, but that's a guess.
pjz99 - If it's a metric in 3D Poser space, that's what I'd guess as well, and it suggests the value should nearly always be << 1, since 1 PNU = 8.6 feet, I think. The other option I've seen is not about Poser, but Lightwave: http://www.except.nl/lightwave/RadiosityGuide96/. Here, the units are 2D pixels on the rendered image. Since the red-ant distribution doesn't seem to increase across the screen all that much when I decrease Irradiance Sample Size - at least in my particular model scene - I doubt that pixels is what's really used. It'd be nice to know for sure, though.
Whichway
The default value is 10, isn't it? It can't be PNU.
Renderosity forum reply notifications are wonky. If I read a follow-up in a thread, but I don't myself reply, then notifications no longer happen AT ALL on that thread. So if I seem to be ignoring a question, that's why. (Updated September 23, 2019)
To be honest, would it really help you to know what units are used there? I don't have the faintest clue as to how the math works for calculating GI, so it wouldn't help me.
Sure it would. If it's a 3D metric like PNUs, then I have to scale it when I zoom in on something in the same scene, e.g., go from full body shot to close-up portrait. If it's 2D pixels, then I only have to scale it if I change the resolution of the final picture. [More or less. I'm only talking about a starting point, but expect to still do some fiddling.]
Whichway
Quote - "can't", or "shouldn't be" :)
I mean that with it set to 10, the sample size cannot possibly be 10 PNU separation between samples, or 10 PNU in area per sample. I see much closer sample spacing (based on red dots) and much smaller sample area (based on the splotches that are drawn between red dots) than that.
Sample Size = 10 PNU just has no interpretation that is consistent with what we see.
Renderosity forum reply notifications are wonky. If I read a follow-up in a thread, but I don't myself reply, then notifications no longer happen AT ALL on that thread. So if I seem to be ignoring a question, that's why. (Updated September 23, 2019)
ISS may be proportional to the number of calculations per screen pixel for non-IDL effects (gather, reflection, refraction, AO, et al.), in which case "Samples" would just be used where somebody wanted to do a realistic render instead. But we'll hafta wait for you-know-who to give the definitive answer, if there's time.
Quote - Short question - anyone know the units of "Irradiance Sample Size"? Thanks.
Pixels, and it's the maximum sample size - minimum size is currently hardcoded to 1 pixel. It's not to be taken literally, though, as the max sample size is also being affected by overall scene size, and the irradiance cache setting.
The irradiance cache slider in the UI is controlling two separate caches in the renderer: one for indirect diffuse, one for AO. They're one slider in the UI, but they can be changed separately in pz3 files and from Python. There they are called "max error" and go from 1 (worst, fastest) to 0 (best, slowest), where for the indirect light the GUI never sets it to 0. *
The irradiance caching algorithm is well documented, for anyone interested in how the whole thing works, the SIGGRAPH 2008 course explains it and a lot more:
http://www.graphics.cornell.edu/~jaroslav/papers/2008-irradiance_caching_class/
I hope this answers some questions.
*If you're interested to see what comes out with a max error of 0, open the Python shell and type this:
poser.Scene().CurrentFireFlyOptions().SetGIMaxError(0)      # 0 = best/slowest; effectively disables the irradiance cache
poser.Scene().CurrentFireFlyOptions().SetGINumSamples(16)   # number of GI samples
Then render.
Brave souls can replace the 16 with higher numbers. MaxError = 0 effectively turns off irradiance caching and the indirect light is calculated with plain path tracing.
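For those who follow the link: the classic cache-reuse test from Ward's papers, which that course walks through, looks roughly like the sketch below. It's the textbook version, offered to illustrate the idea rather than as a copy of FireFly's code, and it shows why max error = 0 means the cache is never reused:

import math

def record_weight(p, n, p_i, n_i, R_i):
    # p, n: point and normal being shaded; p_i, n_i, R_i: a cached record's point,
    # normal, and harmonic-mean distance to the surfaces it "saw".
    dist_term = math.dist(p, p_i) / R_i
    angle_term = math.sqrt(max(0.0, 1.0 - sum(a * b for a, b in zip(n, n_i))))
    return 1.0 / (dist_term + angle_term + 1e-9)

def record_usable(p, n, record, max_error):
    # A record contributes only if its weight beats 1/max_error.
    # max_error = 0 makes the threshold infinite, so nothing is reused: plain path tracing.
    if max_error == 0:
        return False
    return record_weight(p, n, record['p'], record['n'], record['R']) > 1.0 / max_error

print(record_usable((0, 0, 0), (0, 0, 1), {'p': (0.1, 0, 0), 'n': (0, 0, 1), 'R': 1.0}, 0.5))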
Don't you guys find the render time for GI a bit out of whack? I did a render last night at 1280x800 and it took about 10 hours.
Doing one right now at 400px and it's a good 30+ minutes. Maybe I'm expecting too much, but it seems like it should be a little faster - like 4x faster - for the kind of quality it's putting out.
Why not sign a deal with mental ray?
Yeah, it's a headache. And of course I render on my home business/office computer and need to do stuff while things are rendering. I really really hope Poser Pro 2010 gets this all safely into 64 bit, but then I'll have to upgrade the computer too....
Oh well. If you think this takes a long time, go over to DAZ and find the Bryce forums. Of course, it's pretty much idiosyncratic and outdated, but yikes - they're talking 48 hour renders over there. Mind you, usually pretty complex scenes.
______________
Hardware: AMD Ryzen 9 3900X/MSI MAG570 Tomahawk X570/Zotac Geforce GTX 1650 Super 4GB/32GB OLOy RAM
Software: Windows 10 Professional/Poser Pro 11/Photoshop/Postworkshop 3
I'm running one right now that has been going for over 2 hours, and it is just over half done precalculating the indirect lights. The scene is very heavy in details, with overhanging trees, bushes, and a stream. It never could render in P7 in one pass without a crash. I hope it's done when I get up in the morning.
The higher the quality, the longer the time. I'll gladly pay that time price based on the few full scenes I have run through P8 vs. what I was getting in Poser 7 and then Poser Pro.
Gary
"Those who lose themselves in a passion lose less than those who lose their passion"
Hey Wolf 359, have you seen the program called Messiah? I have Cinema 4D. Messiah has semi-automatic rigging for people, and I think for animals. It is for pure animation. You animate in Messiah, then open it up in Cinema or other big-name apps and the animation runs in Cinema. It has something called a walker. It is the neatest thing I've seen. It has flexibility that is unbelievable.
I basically only do animations myself. I don't have the program, but I have been thinking seriously of importing the Poser characters into it and rerigging them. But I'm not sure about the skin textures; this is holding me back. I would be interested in hearing what someone thinks about that, if you're into animation.
Sorry to get off subject, as I am waiting on my Poser 8 to come in that I ordered.
Quote - The artifacts are real, no doubt. Stefan intends to fix this for SR1. I have a call scheduled to discuss renderer stuff with Uli, Steve, and Stefan Thursday.
Can you please tell them to have a separate option for raytraced bounces and IDL bounces?
Please? If I ever get Poser 8 and render a car, I don't want to have 6 bounces of reflections. That's crazy.
And please, SM, don't think that Poser users are 100% idiots, because that's the feeling I am getting.
"Let's make the options for GI kid-simple so that no one is confused."
No matter what they do, someone will always be confused.
The version included in Poser 8 is new, but it carries on what he'd already done for Poser 7/Pro to enable the GI guts that were not exposed to the standard render controls.
Indeed. What's keeping me from making it a guide is I still don't know the way.
I kind of think many changes will be coming in SR1, perhaps too many to bother explaining all the intricacies of it as it is now. For example, in another thread, I've detailed how a single polygon leaks a variable amount of IBL depending on how far away it is, but a pair of polygons do not leak at all. Is this anomaly worth explaining if it is going away shortly?
Renderosity forum reply notifications are wonky. If I read a follow-up in a thread, but I don't myself reply, then notifications no longer happen AT ALL on that thread. So if I seem to be ignoring a question, that's why. (Updated September 23, 2019)
Quote - Just an observation - this "dummies guide to indirect lighting" thread has turned into anything but. :-)
True, but IDL is such a massive improvement and people want to use it now. So if they are like me, they are watching, reading, and taking lots of notes, looking for direction, ideas, and what to do and not to do from people with far more knowledge and experience - then taking that and testing on their own. I do this even with the knowledge that some of it might go out the window with SR1 or even an SR2. But there are still ideas and concepts being shared that will apply right now and after SR1.
Gary
"Those who lose themselves in a passion lose less than those who lose their passion"
Quote - But there are still ideas and concepts being shared that will apply right now and after SR1.
The trick is to figure out what's going to be relevant in a few weeks' time. As outlined by BB above and elsewhere, there are a number of anomalies with the renderer, a few glitches (e.g. the dark spots on some surfaces), and the grindingly slow computations when transmapped hair is encountered.
I think that BB was questioning the value of attempting to find workarounds for these issues when the guys that can really sort them out are probably working on them right now. We'll have to wait for SR1 to see what's been sorted. I suspect other things will crop up thereafter.
One thing you can do to speed up render time is to use depth-mapped, as opposed to ray-traced, shadows on infinite lights (I haven't checked it on other types yet). I am baffled as to why there should be such a huge difference between them, but who knows? The difference may not be as meaningful after SR1.
Windows 10 x64 Pro - Intel Xeon E5450 @ 3.00GHz (x2)
PoserPro 11 - Units: Metres
Adobe CC 2017
Yes that is "adequate". Render time sucks, but it looks great, actually.
Sample size and number of samples got you there, as I thought it would. I didn't have the patience to run that test before dinner. :)
Renderosity forum reply notifications are wonky. If I read a follow-up in a thread, but I don't myself reply, then notifications no longer happen AT ALL on that thread. So if I seem to be ignoring a question, that's why. (Updated September 23, 2019)