
Renderosity Forums / Poser - OFFICIAL




Forum Coordinators: RedPhantom




Subject: A Dummies Guide to Indirect Lighting in Poser 8


grichter ( ) posted Wed, 12 August 2009 at 6:06 PM

Quote - > Quote - But there are still ideas and concepts being shared that will apply right now and after SR1.

The trick is to figure out what's going to be relevant in a few weeks' time. As outlined by BB above and elsewhere, there are a number of anomalies with the renderer, a few glitches (e.g. the dark spots on some surfaces), and the grindingly slow computations when transmapped hair is encountered.

True, but I know pjz99 from the C4D forum. C4D has had GI for some time, and Paul knows more about what it should be doing (that is the part I am after). I understand that Poser 8 might not do what it is supposed to even after SR1. I am not currently looking to set this at 2 and that at 20 to get the best results. That comes later, with more experience from everybody with IDL.

Gary

"Those who lose themselves in a passion lose less than those who lose their passion"


mobelgod ( ) posted Wed, 12 August 2009 at 6:13 PM

When I Render from the D3D script, the GI pass is skipped. How do you enable it? Does the .py file require editing?


bagginsbill ( ) posted Wed, 12 August 2009 at 6:25 PM

Quote - When I Render from the D3D script, the GI pass is skipped. How do you enable it? Does the .py file require editing?

Enable it in regular render settings before using the D3D dialog.


Renderosity forum reply notifications are wonky. If I read a follow-up in a thread, but I don't myself reply, then notifications no longer happen AT ALL on that thread. So if I seem to be ignoring a question, that's why. (Updated September 23, 2019)


scarlock ( ) posted Wed, 12 August 2009 at 6:31 PM

No - you need to turn on GI in the main Render Settings dialog.


Whichway ( ) posted Wed, 12 August 2009 at 6:42 PM · edited Wed, 12 August 2009 at 6:54 PM

file_436783.jpg

Would our eagle-eyed render gods give me a reading on this render, please? It looks pretty good to me, but I have too little experience to know what to look for. Thanks.

pjz99's Simple GI Test 2 setup with modified render parameters.
2min 10 sec render on my dual core laptop.

Whichway


mobelgod ( ) posted Wed, 12 August 2009 at 6:44 PM

Worked like a charm. Thx BB.


bagginsbill ( ) posted Wed, 12 August 2009 at 7:13 PM

Quote - Would our eagle-eyed render gods give me a reading on this render, please? It looks pretty good to me, but I have too little experience to know what to look for. Thanks.

pjz99's Simple GI Test 2 setup with modified render parameters.
2min 10 sec render on my dual core laptop.

Whichway

Light quality looks great to me - very smooth - can't see splotches.
Occlusion shadow quality overall looks great.

Occlusion shadow on the large square from the small square looks like not an accurate shape. That's where we have to reduce sample size. (I think)

Are you going to tell us the settings? :)




LostinSpaceman ( ) posted Wed, 12 August 2009 at 7:21 PM

Quote - Just an observation - this "dummies guide to indirect lighting" thread has turned into anything but. :-)

Frankly I bowed out of the thread after being told to go read other threads if I wanted any answers.


Whichway ( ) posted Wed, 12 August 2009 at 7:24 PM

file_436791.jpg

Here are the settings. I *almost* have a coherent sounding story to explain them, but it's failing on the last parameter - Indirect Light Irradiance Cache. That's not behaving as I want to predict and isn't even very stable. More cogitation required....

Whichway


Whichway ( ) posted Wed, 12 August 2009 at 7:28 PM

LostInSpaceMan - I think part of the problem is that even the non-dummies (me, I'm definitely in the dummy class) are having a hard time still. Difficult to distill it down in that case. :sad:

Whichway


bagginsbill ( ) posted Wed, 12 August 2009 at 8:08 PM

Quote - Here are the settings. I almost have a coherent sounding story to explain them, but it's failing on the last parameter - Indirect Light Irradiance Cache. That's not behaving as I want to predict and isn't even very stable. More cogitation required....

Whichway

IC=200?!? OK now I must start over. According to my understanding of what Stefan said, that means you want 200% of the samples to be calculated instead of interpolated. Is this a way to force twice as many samples, and none from the cache?!




stewer ( ) posted Wed, 12 August 2009 at 8:13 PM

The one set to 200 is the occlusion cache, which doesn't get used at all in this case - that one's for AO only (and internally, Poser clamps it to a max of 100 anyway).

The deciding factor here is the second IC parameter. Basically, what these settings do is calculate a few high-quality shading samples (4096 paths per sample) and then generously interpolate between them. Because the indirect light in this scene is very smooth, this works beautifully.


LostinSpaceman ( ) posted Wed, 12 August 2009 at 8:15 PM · edited Wed, 12 August 2009 at 8:16 PM

Quote - LostInSpaceMan - I think part of the problem is that even the non-dummies (me, I'm definitely in the dummy class) are having a hard time still. Difficult to distill it down in that case. :sad:

Whichway

Be that as it may, when I get an answer telling me to go read twelve or more other threads to find the answer, it's no longer worth my time to participate. If the answer is unknown, just say, Hey, I don't know it. Don't say go read all these other 20+ page threads and you'll find the answer in there somewhere, which is basically the answer that sent me packing.


Whichway ( ) posted Wed, 12 August 2009 at 8:38 PM · edited Wed, 12 August 2009 at 8:40 PM

Quote -   > Quote - question - anyone know the units of "Irradiance Sample Size"? Thanks.

Pixels, and it's the maximum sample size - the minimum size is currently hardcoded to 1 pixel. It's not to be taken literally, though, as the max sample size is also affected by overall scene size and the irradiance cache setting.
The irradiance cache slider in the UI controls two separate caches in the renderer: one for indirect diffuse, one for AO. They're one slider in the UI, but they can be changed separately in pz3 files and from Python. There they are called "max error" and go from 1 (worst, fastest) to 0 (best, slowest), where for the indirect light the GUI never sets it to 0. *

The irradiance caching algorithm is well documented, for anyone interested in how the whole thing works, the SIGGRAPH 2008 course explains it and a lot more:
http://www.graphics.cornell.edu/~jaroslav/papers/2008-irradiance_caching_class/

I hope this answers some questions.

*If you're interested to see what comes out with a max error of 0, open the Python shell and type this:
poser.Scene().CurrentFireFlyOptions().SetGIMaxError(0)
poser.Scene().CurrentFireFlyOptions().SetGINumSamples(16)
Then render.
Brave souls can replace the 16 with higher numbers. MaxError = 0 effectively turns off irradiance caching, and the indirect light is calculated with plain path tracing.

Stefan was quoting the manual in the post you are remembering. I'm not sure the manual is fully accurate, to be diplomatic. I've found the above post, along with the reference, far more useful. [Note: I think he confirms above pjz99's observation that "Irradiance Sample Size" is bounded from below by 1.0.] Associating the cache sliders with "max error" is the key. To make it make sense, I think you have to take the reciprocal of the slider value. That then is the relative error limit for how far off the interpolated irradiance value can be without triggering a new explicit calculation. IC=200 then means 0.5% interpolation accuracy. My story, in progress, takes 0.5% accuracy as the goal, since we ultimately have only 256 possible brightness values in the final image. To help with playing around, I will post what I believe is the lowest quality it is possible to achieve, and you can play with one thing at a time. Note that certain combinations of values can effectively override other parameters; I think this is what's happening with my last parameter - the others have already required things to be better than it's calling for.
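[Editor's note: Whichway's reciprocal reading can be checked with trivial arithmetic. A minimal sketch in plain Python (no Poser required; the reciprocal mapping itself is Whichway's hypothesis from this thread, not a documented formula):]

```python
# Whichway's hypothesis: allowed interpolation error = 1 / (IC slider value).
def interpolation_accuracy(ic_slider: float) -> float:
    """Relative error limit implied by the reciprocal reading (hypothetical)."""
    return 1.0 / ic_slider

# IC = 200 would mean a 0.5% allowed interpolation error...
print(interpolation_accuracy(200))   # 0.005
# ...which is close to the quantization step of an 8-bit image channel,
# the "256 possible brightness values" goal Whichway mentions:
print(1 / 256)                       # ~0.0039
```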

Whichway

(Oops, didn't know the god was watching.)


pjz99 ( ) posted Wed, 12 August 2009 at 8:46 PM · edited Wed, 12 August 2009 at 8:52 PM

Quote - Would our eagle-eyed render gods give me a reading on this render, please? It looks pretty good to me, but I have too little experience to know what to look for. Thanks.

Sample radius of 5 is saving you a lot of computation time, which is good, but imo the large radius is showing.  I think you should have more sharpness in the occlusion shadow where the corners of the small hovering square are nearest the larger square overhead, and a very large sample radius won't give you that.  It's smoother though, that's a good result otherwise.

edit: you're also not getting any obvious brightness reflected from the sphere onto the large square over it.

My Freebies


pjz99 ( ) posted Wed, 12 August 2009 at 8:50 PM · edited Wed, 12 August 2009 at 8:57 PM

Quote - Note: I think he confirms above pjz99's observation that "Irradiance Sample Size" is bounded from below by 1.0.

That was a guess, but it's a pretty safe bet - and if it doesn't have a forced minimum of greater than zero it really ought to, as a radius of zero pixels would mean infinite computation.

edit: I don't think 1.0 is the actual internal minimum though, I saw a difference in render time and quality by forcing it to be smaller.  I think the minimum is between 0.1 and 0.001 (would be silly to actually use 0.001 now that I understand what it means though).

My Freebies


Whichway ( ) posted Wed, 12 August 2009 at 8:52 PM

file_436795.jpg

I know I'm cruising for a smiting, but I'm pretty sure that the setting of the first Irradiance Cache strongly affected the spacing of the red dots. I don't think that should happen if that cache is being ignored. But I have to redo my path to these settings to be sure.

The 4096 is also about 0.5% when used as a number of random samples. My problem has been with the second IC value; it should mean a really bad allowable error. But looking at the red dots that the other parameters cause, I think it's just that the other parameters force enough samples anyway that the cache max error is always satisfied until the requirement gets silly.

Whichway

[Base, low quality parameters posted above.]


pjz99 ( ) posted Wed, 12 August 2009 at 8:53 PM

Quote - I know I'm cruising for a smiting, but I'm pretty sure that the setting of the first Irradiance Cache strongly affected the spacing of the red dots.

Nah I bet you it was sample radius.

My Freebies


Whichway ( ) posted Wed, 12 August 2009 at 8:59 PM · edited Wed, 12 August 2009 at 9:02 PM

That's why I have to do the derivation over again. I'm not fully certain, and it's certainly important. And the order of application matters, since there are several parameters whose ultimate effect is just to affect the locations of evaluation points. I hope to include useful snapshots of my progress along the way so others can repeat the experiments.

Whichway


pjz99 ( ) posted Wed, 12 August 2009 at 9:10 PM

Never mind what I said about the sphere bouncing some brightness back up to the square over it, the previous examples didn't have that either.  Although, looking at how the scene is constructed I'm a little surprised actually, but I guess it's just a bit too far to provoke that effect.

My Freebies


Whichway ( ) posted Wed, 12 August 2009 at 9:19 PM

file_436797.jpg

For reference, here is the render with the only change being Irradiance Sample Size reduced from 5.0 to 1.8. Render time went from ~2min to ~8min. On the PNG original, at least, there now seem to be splotches on the large card.

Whichway


grichter ( ) posted Wed, 12 August 2009 at 9:20 PM

Hold the phone for a second. I seem to remember "pick two" of raytracing, smooth polys, and IDL in Poser. I even remember a comment in the docs about that. Yet Whichway used all three!

Gary

"Those who lose themselves in a passion lose less than those who lose their passion"


Whichway ( ) posted Wed, 12 August 2009 at 9:25 PM

Yeah, there's something there confusing me as well. Polygon Smoothing is on generally and is certainly in effect for the spheres; corners are very obvious with it turned off. However, Polygon Smoothing is turned off explicitly for the backdrop; otherwise, there are what look like AO artifacts across the floor-backdrop transition. I haven't followed the threads on that effect very carefully, being a little occupied.

Whichway


bagginsbill ( ) posted Wed, 12 August 2009 at 9:44 PM · edited Wed, 12 August 2009 at 9:44 PM

Polygon smoothing only creates a problem if the result of smoothing is a concave polygon or saddle shaped curved polygon. Purely convex (bulging) polygons are ok. The sphere is purely convex. The backdrop is concave. So you had to turn it off for one but not the other.

My warning of pick 2 was a generalization. Specific props can work. No human figure will.




pjz99 ( ) posted Wed, 12 August 2009 at 9:46 PM · edited Wed, 12 August 2009 at 9:46 PM

From what I've experienced, the problems with polygon smoothing and the indirect lighting artifacts pop up where you have concave geometry facing the camera (you can see it in that acoustic foam-looking sheet I rendered earlier, and also in the backdrop prop). Convex geometry doesn't seem to have problems. I have no idea why.

edit: heh ^^ yes.

My Freebies


bagginsbill ( ) posted Wed, 12 August 2009 at 9:51 PM

Quote - For reference, here is the render with the only change being Irradiance Sample Size reduced from 5.0 to 1.8. Render time went from ~2min to ~8min. On the PNG original, at least, there now seem to be splotches on the large card.

Whichway

This was my experience too. I tried a lot of things, but I always hit a fundamental tradeoff. If the shadows were properly detailed, the smoothness of the lit areas was lost. If the lit areas were smooth, the shadows were blobby.

In fact, we can get VERY smooth lighting gradients by choosing enormous sample size. Think about it - if the whole card was sampled at only the corners, then by definition, the entire card would be smooth and splotch-free because it would all be interpolated.




bagginsbill ( ) posted Wed, 12 August 2009 at 9:53 PM · edited Wed, 12 August 2009 at 9:55 PM

Quote - From what I've experienced, the problems with polygon smoothing and the indirect lighting artifacts pops up where you have concave geometry facing the camera (you can see it in that accoustic foam-lookin sheet I rendered earlier, and also in the backdrop prop).  Convex geometry doesn't seem to have problems.  I have no idea why.

edit: heh ^^ yes.

I know why, I think. Remember that shadows see the original un-curved polygon, while what is being rendered is a displaced point on the polygon. If the displacement is negative, as happens only with concavity, then the point being rendered is UNDER the original un-bent polygon. Thus, it shadows itself.

An interesting experiment would be to apply a uniform positive displacement in addition to the smoothing. If the uniform displacement is larger than any smoothing-induced negative displacement, then the displaced surface will always be "above" its original position. Self-shadowing should not happen in that case.
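[Editor's note: bagginsbill's geometric argument can be sketched numerically. Treat the original flat polygon as the plane y = 0, a smoothed render point as sitting at some signed height off it, and ask whether a shadow ray toward an overhead light re-hits the flat plane. Plain Python; the numbers are illustrative, not Poser's internals:]

```python
def self_shadowed(displacement: float, bias: float = 0.0) -> bool:
    """Point at signed height `displacement` relative to the flat polygon
    (plane y = 0), light straight overhead. The shadow ray re-intersects
    the original flat plane (self-shadowing) only when the point lies
    below it by more than the shadow bias."""
    return displacement < -bias

# Concave smoothing pushes the shaded point under the flat polygon:
print(self_shadowed(-0.02))              # True: the artifact BB describes
# Convex (bulging) smoothing lifts it above, so no self-shadow:
print(self_shadowed(+0.02))              # False
# BB's proposed fix: uniform positive displacement larger than the dip:
print(self_shadowed(-0.02 + 0.05))       # False
# Whichway's later alternative: raise Shadow Min Bias past the dip depth:
print(self_shadowed(-0.02, bias=0.05))   # False
```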




Whichway ( ) posted Thu, 13 August 2009 at 4:08 AM

file_436812.jpg

Ok, this one has gone somewhat past my point of imaginable render times at 01:43:10.132. I was trying for good occlusion geometry without blotching and got it in most places, except at the edges of the small card's shadow. The gap between the cards looks pretty good, though, I think.

I also tinted the sphere to verify for pjz99 that light from the large sphere does reach the large card.

Whichway


Whichway ( ) posted Thu, 13 August 2009 at 4:20 AM

file_436813.jpg

Settings for my previous post. **Stewer** was of course right that the first Irradiance Cache parameter has no effect with IDL on, so I put it down to zero. **pjz99** was right that it really was the Irradiance Sample Size that mostly controlled the distribution of red dot evaluation points. I raised the second Irradiance Cache slider about as far as it would sensibly go - somewhere beyond 90 the renderer starts acting like the room normals were reversed again, and **Stewer** says Poser won't take anything past 100 anyway. I set this high to crowd evaluation points around the gap between cards. That worked, but the blotchiness got really bad. I raised the Samples parameter by a factor of 16, hoping for a "noise" reduction of maybe 4. Believe it or not, the blotchiness here is quite a bit better than it was otherwise, but that cost drastically in render time. Any other critiques on quality appreciated. Also, I'm very new at this stuff and wonder if there are any pros out there who could offer an opinion as to whether this is an easy scene or a hard scene to render with Global Illumination. Thanks.

Whichway


Whichway ( ) posted Thu, 13 August 2009 at 4:44 AM

file_436814.jpg

One other quick point and then to bed. What BB just described about the concave raytracing sounds a lot like a job for Shadow Min Bias on the light source to me so I gave it a try. I know the manual makes it sound as though Shadow Blur Radius and Shadow Min Bias only work for DM shadows, but it's not true as I discovered some time ago. So, both images here have Smooth Polygons turned back on for the backdrop, and, of course, for the whole render. The one in this post with artifacts has Shadow Min Bias set as delivered from **pjz99** at 0.1 while the other post has it jacked up to 1.0. Looks clean to me.

Whichway


Whichway ( ) posted Thu, 13 August 2009 at 4:45 AM

file_436815.jpg

Shadow Min Bias = 1.0


Whichway ( ) posted Thu, 13 August 2009 at 5:09 AM

bagginsbill said:

Quote - In fact, we can get VERY smooth lighting gradients by choosing enormous sample size. Think about it - if the whole card was sampled at only the corners, then by definition, the entire card would be smooth and splotch-free because it would all be interpolated.

Exactly! And that is precisely what we should be trying to do - go for the largest Irradiance Sample Size we can manage. (In the SIGGRAPH course Stewer pointed to, this seems to be called the Neighborhood Clamp Radius.) This scene is so dominated by really smooth gradients that large ISS should work very well indeed. I'm trying a larger ISS now and it is certainly going faster; quality yet to be determined. Ah, who needs sleep.

Whichway


pjz99 ( ) posted Thu, 13 August 2009 at 6:37 AM

Quote - (smooth polygons and raytracing artifacts)I know why, I think. Remember that shadows see the original un-curved polygon, while what is being rendered is a displaced point on the polygon. If the displacement is negative, as happens only with concavity, then the point being rendered is UNDER the original un-bent polygon. Thus, it shadows itself.

An interesting experiment would be to apply a uniform positive displacement in addition to the smoothing. If the uniform displacement was larger than any smoothing-induced negative displacement, then the displaced surface will always be "above" its original position. Self-shadowing should not happen in that case.

That sounds very reasonable.  This is troubling.  It seems to me that with character figures, this may cause some problems as you observed earlier, but with conforming clothing models that contain a lot of detailed, convoluted geometry, it may cause many more.  In any case I'm going to try some clothing tests with the fantasy suit model I've been working on, with Whichway's approach of greatly increased sample radius, to see if they come out well.  This may be a good compromise between speed and freedom from artifacts, and a pretty agreeable render time.

Whichway your render with a colored sphere was a good test, thanks for that.

My Freebies


pjz99 ( ) posted Thu, 13 August 2009 at 6:43 AM

Quote - Shadow Min Bias = 1.0

This is OK for the test scene with a sphere and a couple of planes, but when you introduce very small geometry, you'll find that a high shadow min bias means that the small geometry will not cast shadows - which is probably why the smoothed polygons in the curved part of the backdrop aren't showing artifacts, they aren't casting a shadow.

My Freebies


pjz99 ( ) posted Thu, 13 August 2009 at 7:17 AM · edited Thu, 13 August 2009 at 7:23 AM

Content Advisory! This message contains nudity

file_436816.jpg

This is the outfit test scene I've been showing that had been provoking a lot of artifacts.  This time I rendered it with Irradiance Sample Size at 20.  I can only see one major artifact, at the thigh, and maybe one other at the breast.  This is a lot better than what I was getting with a small sample radius as I was assuming (foolishly, again) that a small sample radius would automatically mean higher quality.  The other two Indirect Light quality sliders (on Dimension3D's interface) are about at middle values, and GI bounces is set to 8.  Seven minutes 25 seconds.  Pretty good, good enough for me.

edit: I conclude that the biggest single thing you can do to avoid blotch artifacts will be to make Irradiance Sample Size larger, at the expense of some precision of how GI lighting is applied.  I don't think you'd want to always enlarge it to 20, but try for some compromise between low and high value that will give you "pretty good" indirect lighting and avoid artifacts.

My Freebies


Believable3D ( ) posted Thu, 13 August 2009 at 9:28 AM

Could I just get a point of clarification again? Is Irradiance Sample Size higher quality toward the right, with higher numbers, or lower quality?

______________

Hardware: AMD Ryzen 9 3900X/MSI MAG570 Tomahawk X570/Zotac Geforce GTX 1650 Super 4GB/32GB OLOy RAM

Software: Windows 10 Professional/Poser Pro 11/Photoshop/Postworkshop 3


pjz99 ( ) posted Thu, 13 August 2009 at 9:50 AM · edited Thu, 13 August 2009 at 9:51 AM

"Quality" is probably not a good term to use here, at least for now.  Smaller Irradiance Sample Size means that, when a "gob" (a silly term but I think it's fair) of Indirect Lighting illumination is painted over part of an image, it will be a small "gob".  A large value will give you large "gobs".  The gobs of illumination are averaged together.  Large gobs overlap each other a lot, both in area and in the number of overlaps, meaning the averaging will tend to be very smooth, but less accurate (e.g. try painting a picture with a 500 pixel brush in an image editor).  Small gobs only overlap each other a small amount, and less often, meaning that the averaging will be more accurate (which is good) but also more prone to showing an artifact where a difference in brightness between neighboring gobs is too great (which is not good).

This may be a completely silly analogy but I think it's in the neighborhood.  I feel that "Smooth and less accurate" is better than "Full of artifacts and very accurate".  It would be great to have "Smooth and very accurate" but I think that will have to wait for SR1 or later.
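[Editor's note: pjz99's "gob" analogy maps onto a simple moving average: a larger radius averages more overlapping neighbors, giving smoother but less accurate results. A 1D toy in plain Python, not FireFly's actual filter:]

```python
def smooth(values, radius):
    """Average each sample with its neighbors within `radius` (the "gob" size)."""
    out = []
    for i in range(len(values)):
        window = values[max(0, i - radius): i + radius + 1]
        out.append(sum(window) / len(window))
    return out

# A sharp occlusion-shadow edge in the "true" indirect light:
truth = [1.0] * 10 + [0.0] * 10

small = smooth(truth, 1)   # small gobs: edge stays fairly sharp
large = smooth(truth, 6)   # large gobs: edge smeared over many pixels

# Worst-case deviation from the true signal grows with gob size:
print(max(abs(a - b) for a, b in zip(truth, small)))
print(max(abs(a - b) for a, b in zip(truth, large)))
```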

My Freebies


pjz99 ( ) posted Thu, 13 August 2009 at 9:59 AM

You know what, I wish that Poser's raytraced shadows had a similar option when dealing with blurred shadows (sample radius).

My Freebies


Believable3D ( ) posted Thu, 13 August 2009 at 10:03 AM

Thanks, PJZ. By "quality," I meant something like "intended accuracy." So a smaller number ought to be better quality by that definition, but artifacts make the render worse in actuality. That's how I'm interpreting your response. Right?

______________

Hardware: AMD Ryzen 9 3900X/MSI MAG570 Tomahawk X570/Zotac Geforce GTX 1650 Super 4GB/32GB OLOy RAM

Software: Windows 10 Professional/Poser Pro 11/Photoshop/Postworkshop 3


pjz99 ( ) posted Thu, 13 August 2009 at 10:07 AM

I think that's fair, yeah.  It doesn't matter in all cases though - it's pretty obvious in the test scene with the sphere and hovering squares, but in the outfit I'm showing just now I don't think it makes a difference anywhere you can see.

My Freebies


Whichway ( ) posted Thu, 13 August 2009 at 12:47 PM

Quote - > Quote - Shadow Min Bias = 1.0

This is OK for the test scene with a sphere and a couple of planes, but when you introduce very small geometry, you'll find that a high shadow min bias means that the small geometry will not cast shadows - which is probably why the smoothed polygons in the curved part of the backdrop aren't showing artifacts, they aren't casting a shadow.

Yes, I know the problem. I discovered (actually was told, I think) that Shadow Min Bias worked with raytraced shadows while I was getting a good result with a displacement mapped version of body hair. That needed a very small value of the bias to eliminate tiny polygonal self-shadowing. My point mostly was that Shadow Min Bias does effectively what BB was suggesting and confirms his analysis of the problem. Any individual render will vary.

Whichway


Whichway ( ) posted Thu, 13 August 2009 at 2:49 PM · edited Thu, 13 August 2009 at 3:00 PM

Quote - Thanks, PJZ. By "quality," I meant something like "intended accuracy." So a smaller number ought to be better quality by that definition, but artifacts make the render worse in actuality. That's how I'm interpreting your response. Right?

Actually, the conclusion I'm coming to is that the second Irradiance Cache slider is really the "intended accuracy" control in a very direct manner - high quality to the right. (There may be a problem somewhere above 90, and this bit needs some investigation.) From Stewer's comment earlier, it maps to "max error" inside the program. I believe that it is the reciprocal of the slider value that is actually the max error. Hence, larger numbers are higher accuracy.

The magic of the Irradiance Cache algorithm works something like this: Shoot out a ray from the camera until it hits something. From that point in the scene, start sending out IDL Samples rays in random directions. Start with the first ray and follow it until it hits something, meanwhile remembering that we still have to do all the others. We'll eventually need the total light intensity at each of these points, but they are only potential evaluation points so far. Keep this up until we've run out of bounces on that first ray chain. This is going to be our first evaluation point. We send out one more bundle of IDL Samples rays. For these, when they hit, we trace back to the light sources, take the source light intensities and multiply them by the Diffuse color/value at each final ray's hit point, then average all the results from all the rays shot from the evaluation point. That gives us an estimate of the total amount of light arriving at the evaluation point, a.k.a. the irradiance at the evaluation point. Note that this evaluation point can be quite far from the point where the original camera ray hit.

But, wait! There's more! Since we have a bunch of samples in different directions at the evaluation point, we can weasel some more information out of them besides just the irradiance. Hairy math occurs at this point, but what comes out are numbers that allow us to calculate approximately what the irradiance is at points in the scene near to the evaluation point and, even better, how good that approximation is as a function of distance from the evaluation point. We store these numbers along with the location of the evaluation point and the irradiance in such a way that we can find them quickly given a nearby point in space. (Major Computer Science magic in that one.)

Whew. All that for the first ray shot from the last bounce from the first camera ray; only a bazillion more to go. It doesn't look promising on the render time front. But let's plow ahead. Let's assume, just so I'm not here all day, that the next final bounce lands pretty close to the evaluation point we already have recorded. We use the error estimator information from the evaluation point and the distance to the new, potential evaluation point to see if the estimate is good enough. The "max error" parameter is the definition of "good enough". If the estimate is good enough, we just use that and avoid sending out the last IDL Samples set of rays. If it is not good enough, we change the current point from a potential evaluation point to a real evaluation point, send out the last rays and record the new evaluation point's irradiance, approximation function and error estimate parameters.

If you think about that for a while, taking breaks to let your brain cool down as appropriate, you'll see that in the early stages, nearly all potential evaluation points get turned into real evaluation points. But, despite my assumption in the paragraph above, these evaluation points typically get spread throughout the scene space and each of them carries a little ball inside of which it knows what the irradiance is. Eventually, and given the number of points we have to check this is certain to happen fairly early on, potential evaluation points start landing in these little balls and we get a fast answer.

Now, there is an even more subtle coolness that happens, and you can see it best by watching where the little red dots appear. (Each red dot is a back projection into the camera of where a real evaluation point is.) It turns out that the error estimator at each evaluation point depends on the rate of change of the local geometry near that point; fast change means the error goes up fast with distance from the evaluation point. That means that in that neighborhood, more potential evaluation points will be converted to real evaluation points. You see that as the red dots crowding in around corners and the like, where the geometry changes quickly.
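[Editor's note: the core decision Whichway describes - reuse a cached value if its error estimate is good enough, otherwise promote the point to a real evaluation - can be condensed into a sketch. This is a deliberately simplified 1D caricature in plain Python: the names `IrradianceCache` and `expensive_irradiance` are invented, and a real irradiance cache interpolates between records using gradients rather than reusing the nearest value:]

```python
import math

def expensive_irradiance(x):
    # Stand-in for shooting a full bundle of IDL sample rays (the slow part).
    return math.sin(x) + 1.0

class IrradianceCache:
    """Toy 1D cache. Each record keeps a position and its irradiance;
    a record is reusable within a validity radius that shrinks as
    max_error (the "good enough" threshold) is tightened."""

    def __init__(self, max_error, base_radius=1.0):
        self.max_error = max_error
        self.base_radius = base_radius
        self.records = []      # (position, irradiance) of real evaluation points
        self.evaluations = 0   # count of expensive evaluations performed

    def irradiance(self, x):
        valid = self.base_radius * self.max_error
        for px, e in self.records:
            if abs(x - px) <= valid:
                return e       # good enough: reuse the cached value
        # Not good enough: promote this potential point to a real one.
        self.evaluations += 1
        e = expensive_irradiance(x)
        self.records.append((x, e))
        return e

# Tightening max_error (pushing the quality slider right) forces more
# real evaluation points - the red-dot crowding Whichway describes:
for err in (0.5, 0.05):
    cache = IrradianceCache(max_error=err)
    for i in range(100):
        cache.irradiance(i / 100.0)
    print("max_error", err, "->", cache.evaluations, "real evaluations")
```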

I recommend playing with just the Samples and the second Irradiance Cache slider for a while to get a feel for this.

Now, about the Irradiance Sample Size. This is a heuristic (read "hack") to deal with the occasional cases where the complicated algorithm above goes wrong. Because the evaluation point error estimator is determined from random samples, sometimes you miss a critical sample that reveals an important change in local geometry. In that case, the error estimator says things are good at a given distance away when they are not. This leads to artifacts similar to "light leaks" - little places where the irradiance is clearly wrong. The hack is to apply an artificial upper bound on how far from an evaluation point any estimator is allowed to be valid. This adds more real evaluation points, but it does it everywhere, including places where the regular estimator would be perfectly fine. You can see this as a mesh of roughly evenly spaced red dots appearing when you decrease the Irradiance Sample Size. Most of the time, this is throwing away perfectly good cached values, and your render time skyrockets. Personally, I'd try raising Samples significantly first, to see if I can hit that critical geometry, before lowering Irradiance Sample Size.

Summary: Keep Irradiance Sample Size as large as you can and apply it last. The second Irradiance Cache slider is the real quality parameter and your render-time friend. Set it as high as necessary, but no higher. Don't be too afraid of a lot of Samples; they are not used nearly as often as you might think and, since they're random, things get better only with the square root of Samples. 
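The square-root behavior is easy to demonstrate with any Monte Carlo estimate (a generic illustration, nothing Poser-specific): quadrupling the sample count only roughly halves the error.

```python
import random

def mc_error(n, trials=200, seed=1):
    """Mean absolute error of an n-sample Monte Carlo estimate of the
    mean of U(0,1), whose true value is 0.5, averaged over many trials."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(trials):
        est = sum(rng.random() for _ in range(n)) / n
        total += abs(est - 0.5)
    return total / trials
```

Comparing `mc_error(100)` with `mc_error(400)` shows the error dropping by about a factor of two, not four, which is why throwing more Samples at a problem has diminishing returns.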

Let's try that for a while and see how we do.

Whichway


Whichway ( ) posted Thu, 13 August 2009 at 4:46 PM · edited Thu, 13 August 2009 at 4:56 PM

file_436854.jpg

Getting there, I think. Render time 00:16:50.311 with good occlusion geometry and low blotchiness.

Whichway


Whichway ( ) posted Thu, 13 August 2009 at 4:46 PM

file_436855.jpg

Settings for same.


Whichway ( ) posted Thu, 13 August 2009 at 5:01 PM · edited Thu, 13 August 2009 at 5:07 PM

BTW, I've confirmed the relationship between the second Irradiance Cache slider and max error by setting max error explicitly to 0.025 (2.5%) via Python, as Stewer described: the slider then read 97.5.

Oh, phutz, I can't calculate this morning (too little sleep). Anyway, 0.025 does map to 97.5. Somebody awake please work out the formula.

Whichway


momodot ( ) posted Thu, 13 August 2009 at 5:17 PM · edited Thu, 13 August 2009 at 5:26 PM

file_436858.jpg

Above posts are pretty heavy stuff for someone like me...

For a dummy like me... here is a set-up lit and rendered as I would in P7, and the same set-up in P8 with IDL. Any advice on how to tackle a basic render like this?

In the IDL renders it looks like my surfaces are emitting light, but I have 0.0 ambient and a 50% gray in the diffuse.

Is there a simple way to just "turn down" the IDL effect a few notches?

My tests are telling me I need to give up the old six- and eight-light sets I have been using and work with only one to three lights, but I am also getting the feeling there is some sort of interaction between the Raytrace Bounce settings and the Indirect Light Sample settings... I haven't worked it out :)



momodot ( ) posted Thu, 13 August 2009 at 5:28 PM · edited Thu, 13 August 2009 at 5:31 PM

file_436859.png

This test took far more resources than it should have. One spotlight in the scene; Raytrace Bounces=3, IDL Quality=9 (to get rid of artifacts), Pixel Samples=6.



Whichway ( ) posted Thu, 13 August 2009 at 5:28 PM

{Duh. Wake up, wake up! It's not even morning anymore. Have some coffee!}

Ok.

Max Error in % = 100 - second Irradiance Cache setting.

Implies IC=100 turns off caching and every point is a real evaluation point, hence the carpet of red dots.
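As a sanity check, the relation is trivial to express in Python (plain arithmetic, not a Poser API call; the function name is made up):

```python
def ic_slider_to_max_error(ic_slider):
    """Convert the second Irradiance Cache slider (0..100) to the
    renderer's max-error fraction: error% = 100 - slider, so a
    slider of 97.5 corresponds to a max error of 0.025 (2.5%)."""
    return (100.0 - ic_slider) / 100.0
```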

Whichway


Whichway ( ) posted Thu, 13 August 2009 at 5:33 PM · edited Thu, 13 August 2009 at 5:35 PM

Momodot - Do you know about the Dimension 3D "Render Firefly.py" script mentioned several times earlier in this thread? If so, could you take a screen shot of the panel of parameters you are using so we can have a look? Thanks. And what does "too many resources" actually mean? Render time or what?

Whichway


momodot ( ) posted Thu, 13 August 2009 at 5:39 PM · edited Thu, 13 August 2009 at 5:41 PM

file_436862.png

Hi... sorry to be unclear. I'll check that script out. I meant that the raytrace/IDL/pixel sample settings are about double what they should be for a simple render, and it runs slow. Here the settings are at half, but with another light added... I need to tone down these lights, as this looks pretty bright for a 50% gray surface. Maybe all lights should total 100% white at 100% intensity and not go over that... I'll try cutting the lights so they total 1.0.


