Forum: Poser - OFFICIAL


Subject: Lighting Question

Nebula opened this issue on Dec 13, 2006 · 47 posts


CaptainJack1 posted Thu, 14 December 2006 at 11:30 AM

Quote - But I guess I expected to see a white spot rendered.

 

The reason is that a "light source" in Poser (or any 3D rendering program) isn't an actual object, like a light bulb or the sun. It's more of a reference point for coloring objects that are in the scene.

WARNING: GEEKY, MATH-TYPE STUFF AHEAD.

Everything in 3D rendering is about numbers representing points in space, and colors at those points. I'm going to use ray tracing as an example, but the concept is similar in all kinds of rendering. The important elements of our scene are the Camera, the Render Window, the Object that we're trying to make a picture of, and, finally, the Light Source. Each of these has many different attributes, but for this example, we'll concern ourselves with just a few.

The Camera is a fixed point in space (X, Y, & Z coordinates), and points in some direction. In this example, it's pointing in the general direction of the Object. The Render Window is an imaginary rectangle between the Camera and the Object. It represents the actual image that we're going to end up with. The Object itself consists of many connected polygons (usually triangles and quadrilaterals), each defined by its corner points in space, called Vertices (each one an X, Y, Z coordinate). Each polygon in the object has a color value: four numbers representing its red value, green value, blue value, and transparency, respectively. Commonly, this color comes from a texture map that has been applied to the object, but it may come from a procedural shader, too. Finally, there is the Light Source, which is also a fixed point in space. It, too, has a color value.
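Just to make that concrete, here's a toy sketch of those scene elements as plain data. This isn't how Poser or FireFly stores things internally (the names and layout are my own invention); it's only to show that every element really does boil down to points in space and color values:

```python
# Hypothetical scene data for the ray-tracing example.
# The specific numbers and field names are made up for illustration.

camera = {
    "position": (0.0, 0.0, -5.0),   # a fixed X, Y, Z point in space
    "direction": (0.0, 0.0, 1.0),   # pointing toward the Object
}

light = {
    "position": (0.0, 5.0, -5.0),   # the Light Source: also a fixed point
    "color": (1.0, 1.0, 1.0),       # it, too, has a color value (white)
}

# One polygon of the Object: three vertices (X, Y, Z points) plus a
# color value -- four numbers: red, green, blue, and transparency.
triangle = {
    "vertices": [(-1.0, 0.0, 0.0), (1.0, 0.0, 0.0), (0.0, 1.5, 0.0)],
    "color": (0.8, 0.2, 0.2, 1.0),
}
```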

For each pixel in the final image, the rendering engine (for example, FireFly in Poser) is going to "shoot a ray" at it. The word ray, in this case, is used in the mathematical sense of "half a line". What this means is, the program is going to mathematically calculate a ray beginning at the Camera (A), and intersecting a point in space that represents the pixel being calculated in the Render Window (at the X). The program will continue drawing the ray to see if it intersects any of the 3D objects in the scene. When it does (at B, in this case), the program draws a second ray from that contact point (B) to the Light Source (C). If the second ray reaches a light source, the program uses the light's color to mix with the color of the Object at the intersection point, modified by the angle that the second ray makes with the object's polygon, to determine what actual color appears at that point. That color is then written out for the pixel, and the engine goes to the next pixel and starts again, until the image is finished.

If the first ray never hits an object, light sources never come into play. Instead, the program draws a background color in the pixel. If the second ray hits another object instead of a light source, the pixel shows a dark color, since it's in shadow. So, pointing a light source at the camera doesn't do anything; since the camera never sees an object, it never looks at the light source.
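If you like seeing this as code, here's a minimal sketch of that per-pixel logic. To keep the intersection math short I'm using a sphere instead of a polygon mesh, and I've left out the shadow test (checking whether the second ray is blocked by another object) -- a real tracer would do that too. All the names here are my own, not Poser's:

```python
import math

def dot(a, b): return sum(x * y for x, y in zip(a, b))
def sub(a, b): return tuple(x - y for x, y in zip(a, b))
def norm(v):
    length = math.sqrt(dot(v, v))
    return tuple(x / length for x in v)

def hit_sphere(origin, direction, center, radius):
    """Distance t along the ray to the sphere's surface, or None on a miss."""
    oc = sub(origin, center)
    b = 2.0 * dot(oc, direction)
    c = dot(oc, oc) - radius * radius
    disc = b * b - 4.0 * c          # direction is unit length, so a == 1
    if disc < 0:
        return None
    t = (-b - math.sqrt(disc)) / 2.0
    return t if t > 0 else None

def shade_pixel(camera, pixel_point, sphere, light, background=(0, 0, 0)):
    """sphere = (center, radius, rgb color); light = dict with position/color."""
    # First ray: from the Camera (A) through the Render Window point (X).
    direction = norm(sub(pixel_point, camera))
    center, radius, obj_color = sphere
    t = hit_sphere(camera, direction, center, radius)
    if t is None:
        return background           # ray never hits an object: background color
    hit = tuple(camera[i] + t * direction[i] for i in range(3))
    # Second ray: from the contact point (B) toward the Light Source (C).
    to_light = norm(sub(light["position"], hit))
    normal = norm(sub(hit, center))
    # The angle the second ray makes with the surface modifies the mix:
    # a grazing light contributes little, a head-on light contributes a lot.
    intensity = max(0.0, dot(normal, to_light))
    return tuple(o * l * intensity for o, l in zip(obj_color, light["color"]))
```

For example, a camera at (0, 0, -5) looking at a red unit sphere at the origin, lit from above, gives a dimmed red for a pixel whose ray hits the sphere, and the background color for one whose ray misses.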

In an actual scene, many, many factors modify this process. These factors can include transparency (which will cause yet another ray to be drawn), reflection (which will also cause more rays to be drawn), fog, light intensity & spread, ambient color, and on and on. But that's the basics.

 

=== END OF GEEKY STUFF ===

Don't know if that's of interest, but maybe it helps. 😄