
Renderosity Forums / Poser - OFFICIAL




Forum Coordinators: RedPhantom




Subject: What is HDRI


kawecki ( ) posted Mon, 18 December 2006 at 12:36 AM · edited Sun, 10 November 2024 at 5:10 AM

I am confused about HDRI: what do people mean by this term?
It is mentioned more and more, and I heard that Poser 7 has HDRI.
What I know is that there is a limitation in 24-bit RGB color. Eight bits per channel is enough for definition; what is not enough is the dynamic range, because the variation of illumination is much bigger than 8 bits can represent. If a part of the scene or picture is much brighter than the rest, that part appears saturated, bad looking, and pure white.
To overcome this problem, the 24-bit colors were extended using floating point instead of 8 bits, and this is supported by the TIFF format.
The question I have is: even if you have high-range textures and images, and the renderer is able to handle more than 8 bits of precision, what is the use if the final result is a JPEG image that is limited to 8 bits? And if you save as TIFF or 16-bit PNG, you will still end up viewing the image on a monitor that is limited to 8 bits. All that effort for nothing!
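The clipping described here can be sketched in a few lines of Python; the radiance values are invented for illustration.

```python
# Sketch of the saturation problem: linear radiances that exceed what an
# 8-bit channel can hold all collapse to 255 (pure white), while a
# floating-point representation keeps the real values.

def quantize_8bit(radiance, exposure=1.0):
    """Map a linear radiance into an 8-bit channel, clipping at 255."""
    value = int(round(radiance * exposure * 255))
    return max(0, min(255, value))

scene = [0.02, 0.5, 1.0, 8.0, 40.0]   # sun-lit spots are far above 1.0

ldr = [quantize_8bit(r) for r in scene]   # [5, 128, 255, 255, 255]
hdr = list(scene)                         # floats: nothing is lost
```

The three brightest values become indistinguishable in the 8-bit version; that is exactly the saturated white area seen in a render or photo.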

Stupidity also evolves!


muralist ( ) posted Mon, 18 December 2006 at 12:46 AM · edited Mon, 18 December 2006 at 12:47 AM

Attached Link: http://www.debevec.org/

Start at http://www.debevec.org/


tekmonk ( ) posted Mon, 18 December 2006 at 1:05 AM

Quote - The question I have is: even if you have high-range textures and images, and the renderer is able to handle more than 8 bits of precision, what is the use if the final result is a JPEG image that is limited to 8 bits? And if you save as TIFF or 16-bit PNG, you will still end up viewing the image on a monitor that is limited to 8 bits. All that effort for nothing!

The simple answer is that more data is always better than less data. Since the file stores more information about color and intensity than 8-bit-per-channel files, you can do things with it that are not possible with normal files (some of which are described at the link muralist posted). Also note that there is research going on, and even some prototype displays, that increase the range of the screen to more than 8 bits.

Plus there is a method called 'tonemapping' which basically flattens the HDR down to normal range but preserves the detail of the HDR, giving you a much higher quality image than you would get with LDR alone.
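The effect of tonemapping can be illustrated with the simple global Reinhard operator L/(1+L), one common tonemapping curve (not necessarily the one any particular plugin uses).

```python
# Global Reinhard tonemapping L/(1+L): compresses unbounded HDR luminance
# into the 0..1 range without hard clipping, so very bright values stay
# ordered and distinct instead of all collapsing to the same white.

def reinhard(luminance):
    return luminance / (1.0 + luminance)

hdr_pixels = [0.05, 0.5, 2.0, 50.0, 500.0]
ldr_pixels = [reinhard(lum) for lum in hdr_pixels]

# Hard clipping at an exposure of 1.0 would map 2.0, 50.0 and 500.0 all
# to white; Reinhard keeps them distinguishable.
```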


kawecki ( ) posted Mon, 18 December 2006 at 1:58 AM

I have known this link since the past millennium; this is nothing new for me.
I don't know Poser, but even in my eight-year-old rendering engine the colors are in floating point; in the rasterizing process they are converted to fixed 16 bits per channel (RGBA), but the final result is 16-bit RGB (not 24-bit). Last week I reworked my engine after many years of pause to give 24-bit RGB.
But there is nothing you can do, as the final media do not support a higher dynamic range. If you compress the range, the result will be a distorted image. If you take only the upper part, the result is that most of the scene is black (happens with photographs). If you truncate, the image is good with the exception of ugly saturated white spots (happens with Poser or digital cameras).
With special high-intensity monitors the bright parts will hurt your eyes.
What I know is that the research of the guy at this link is something different: he uses HDRI images for environment mapping. You take a box and project six images of a cathedral, one on each face of the cube; your camera is inside the cube, so your scenery is the cathedral. Normal images do not have enough dynamic range for all the different illumination in each place; that is the reason for HDRI. But once you focus the camera on one part of the scene, you return to the old 24-bit RGB and the result is normal.
Of course, if your camera is fixed you can put a normal image as the background, but if you want a 360-degree movable camera you need HDRI images mapped onto a box or sphere.
That is why I asked what people mean when they say HDRI.
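The cube (environment) mapping lookup mentioned here can be sketched as follows; the face orientations below follow one common convention and vary between renderers.

```python
def cube_face(direction):
    """Pick which face of an environment cube a (nonzero) view direction
    hits, plus the (u, v) texture coordinates on that face.  The axis
    with the largest magnitude selects the face."""
    x, y, z = direction
    ax, ay, az = abs(x), abs(y), abs(z)
    if ax >= ay and ax >= az:
        face = '+x' if x > 0 else '-x'
        u, v = (-z / ax if x > 0 else z / ax), -y / ax
    elif ay >= az:
        face = '+y' if y > 0 else '-y'
        u, v = x / ay, (z / ay if y > 0 else -z / ay)
    else:
        face = '+z' if z > 0 else '-z'
        u, v = (x / az if z > 0 else -x / az), -y / az
    # remap from [-1, 1] to [0, 1] texture coordinates
    return face, (u + 1) / 2, (v + 1) / 2
```

Looking straight down the +x axis lands in the center of the '+x' face, which is how the six cathedral images surround the camera seamlessly.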

Stupidity also evolves!


tekmonk ( ) posted Mon, 18 December 2006 at 2:23 AM · edited Mon, 18 December 2006 at 2:30 AM

And I told you that you won't get a distorted image if you tonemap it properly. The algorithm takes the details that are normally clipped into white or black (if you just do a simple collapse at some particular exposure) and combines them into one LDR result. You don't have to take my word for it either; there is a free tonemapping plugin for HDRShop you can try on various HDR images and see for yourself. Try googling for it; there used to be a link on Debevec's site, but I don't know where it is now.

BTW, most high-end VFX places now use HDR renders as a matter of course, simply because they can comp and rework them much better than LDR ones. That's why ILM developed the OpenEXR format in the first place.

About the monitors, I have no idea how or what they are doing, just that the tech is in development and it's for LCDs, IIRC. Again, google is your friend...

EDIT : Found the HDR display one:

http://www.bit-tech.net/hardware/2005/10/03/brightside_hdr_edr/1.html

Ain't progress fun :)


kawecki ( ) posted Mon, 18 December 2006 at 2:52 AM

Attached Link: http://www.acm.org/pubs/tog/resources/RTR/

A useful link

Stupidity also evolves!


tekmonk ( ) posted Mon, 18 December 2006 at 3:10 AM

As is this:

http://www.daionet.gr.jp/~masa/rthdribl/


Larry-L ( ) posted Mon, 18 December 2006 at 9:40 AM · edited Mon, 18 December 2006 at 9:42 AM

This is one of the more interesting threads I have seen in the forums. Thanks, I've learned quite a bit, especially about "BrightSide": what a fascinating technology. I was going to wait about a year after "Vista" hit the market (to give everyone time to update their software for it) to build a new machine. After reading this info, I see I may have to wait a little longer for the video cards and monitors to catch up to "BrightSide" when it hits. I hope I have the spare change for it then.


Angelouscuitry ( ) posted Mon, 18 December 2006 at 5:42 PM · edited Mon, 18 December 2006 at 5:44 PM

HDRI makes a program work like Vue or Bryce, where the outer limits of the "World" have color/light values that always reflect inwards, and in a number of directions. This way reflective metal materials actually work. By default the outer limits of the Poser universe have no light and are just duds...really.

Another term for this would be Sky Dome. In the past month or so I've done considerable work with reflections and sky domes.

What you can attempt to do is add lights to your scene from behind/through where the light would be coming from in your 2D panoramic HDRI image. Light is then filtered through the image/sky dome, and very realistic lighting is present when compositing a render with actual photography.


kawecki ( ) posted Mon, 18 December 2006 at 9:17 PM

Sky domes and ambient illumination are not HDRI. HDRI means "high dynamic range illumination", that is, any light or illumination scheme that uses intensity ranges beyond the 256 possible values for a color channel in 24-bit RGB.
You can map a sky dome with a normal or HDRI texture, but it is nothing more than a background sky image; it doesn't illuminate the scene.
To have a scene illuminated by the light coming from the atmosphere you need something more.
There are many solutions to the problem. One is to use a physically based illumination model taking into account the scattering of the atmosphere.
Another approach is to map a sphere with a texture. The mathematics involved are very simple, but you need special textures and a special light source.
First of all, one sphere is not enough; you need two spheres. If you look at the sky you see that it is blue, but you are not illuminated by a blue color!
If you use a sphere painted with a blue gradient and use this sphere as a filter, your scene will be blue, which is wrong!
So you need two spheres: one with the blue color that doesn't act as a filter and only gives you a background image of the sky, and another that acts only as a filter for the incoming light, without any illumination visible on the sphere itself. This time the texture has to have orange-yellow colors depending on the position of the sun.
As for the light source, you need a special omnidirectional convergent light: a light that comes from outside in all directions and converges toward the center of the sphere.
This kind of light is, of course, not available in Poser, and any array of spotlights will fail to produce the desired effect.

Stupidity also evolves!


Angelouscuitry ( ) posted Mon, 18 December 2006 at 10:22 PM

That's what I said!  ;  ) ...

*If you use a sphere painted with a blue gradient and use this sphere as a filter your scene will be blue, what is wrong!

*Yep, but I didn't mean, ...err, wouldn't know how, to filter Global Illumination through each pixel of your sky image, especially with Poser's 8-light preview limit. Rather, the sky image will have a sun in it somewhere, which tells you where to aim your light into the scene from.


rigul64 ( ) posted Mon, 18 December 2006 at 10:34 PM

file_362963.jpg

Actually, it does illuminate the scene, which is the reason they are used; and you do not need the image to appear as a background (at least in MAYA, which is what I use).

Here is a simple scene illuminated only by an HDR image mapped to a sphere (actually it's only half a sphere), used for both the lighting and the reflections. There are no other lights in this scene; it is strictly illuminated by the HDR image. I set the image to not be visible in the render as a background, only visible in the reflections.

Here are two other links explaining High Dynamic Range Imaging, or Image (not illumination):

http://www.highpoly3d.com/writer/tutorials/hdri/hdri.htm

http://en.wikipedia.org/wiki/HDRI


Prikshatk ( ) posted Tue, 19 December 2006 at 4:02 AM

That wiki article is slightly misleading; try this one:

http://en.wikipedia.org/wiki/HDRR

regards
pk
www.planit3d.com


TrekkieGrrrl ( ) posted Tue, 19 December 2006 at 4:27 AM

OK, all this HDRI talk is very interesting. I won't say I understand the details, but that doesn't matter as long as I know how to use it, right?

So my question is: are there any HDRI pictures available out there? I mean, P7 comes with only TWO, and they're really not enough to properly explore this new feature.

Alternatively... can you make them yourself? And then, how? It looks like they need to be saved in a special way?

Please use easy-to-understand words if you're explaining "how" to me... Thanks :o)

FREEBIES! | My Gallery | My Store | My FB | Tumblr |
You just can't put the words "Poserites" and "happy" in the same sentence - didn't you know that? LaurieA
  Using Poser since 2002. Currently at Version 11.1 - Win 10.



kawecki ( ) posted Tue, 19 December 2006 at 4:57 AM

Dosch has many CDs with HDRI images; whether they are good or bad I have no idea.

Stupidity also evolves!


Prikshatk ( ) posted Tue, 19 December 2006 at 5:02 AM

Hi TG

There are hundreds, all free... google for "HDR probe"
http://www.hdrimaps.com/
http://gl.ict.usc.edu/Data/HighResProbes/
http://www.debevec.org/Probes/
http://www.cgmill.com/olmirad/hdr.html
http://www.iseetheskyinyoureyes.com/

Try this Polish site first; it has several probes from other sites:
http://www.max3d.pl/show.php?id=hdri

You may need to convert them to a format Poser understands, you can use HDRshop for that.

regards
pk
www.planit3d.com


TrekkieGrrrl ( ) posted Tue, 19 December 2006 at 6:31 AM

Thanks! Prikshatk! That was just what I needed!

Off to experiment




Angelouscuitry ( ) posted Tue, 19 December 2006 at 8:31 PM

*"Actually it does illuminate the scene,"

*rigul64 - But doesn't it change the color of objects in the scene? So, is the light admitted as greyscale?


Rhale ( ) posted Wed, 20 December 2006 at 3:15 AM

Thanks for the HDR probe links :)


Prikshatk ( ) posted Wed, 20 December 2006 at 4:17 AM

Hi Kawecki

I share your annoyance at the use of HDRI to describe images that are not actually HDR.

This has become more of a problem lately, as photographers have taken up the term when using tone mapping to merge several images from a bracketed exposure into a single image. That is what I thought was wrong with that first Wiki article: hijacked by photographers. Having established that there are no HDR monitors available, the article still describes all the images on the page as HDRI!

It might help to do what Paul Debevec suggested: when the multiple exposures are combined in HDRShop, the resulting file should no longer be described as an "image"; you should use the term "probe".

It's also worth remembering that HDR probes are for 3D render engines and were never meant to be human-readable!

regards
pk
www.planit3d.com


operaguy ( ) posted Wed, 20 December 2006 at 4:37 AM

I am going to be a major lurker in this thread; bookmarking.


rigul64 ( ) posted Wed, 20 December 2006 at 10:05 AM

@Angelouscuitry - The lighting information is gathered from the image's pixels. It does affect the color of the objects, depending on the image used.
I'll try to explain this as simply as possible.
OK, in simple terms, an HDR image contains so much more lighting information than a standard digital image that it mimics real-world lighting.
When you use an image to illuminate a scene instead of the standard digital lights, this is referred to as Image Based Lighting (IBL). Now, you don't have to use an HDR image for IBL, but they're used because of what I stated previously.
So you have your scene, it's enclosed in a sphere (skydome), and the image is mapped to the skydome. Again in simple terms, the information stored in the image's pixels is looked at and then projected out onto the scene, much like the rays of the sun, and bounced around like the sun's rays, thus illuminating your scene. I would also like to say that it is common to use standard digital lights in conjunction with IBL; one reason I can think of right now is to help define the shadows in the scene.
So this is a "watered down" explanation; I hope it helps your understanding of HDRI and of IBL as well.
I have set up some examples using different HDRIs and also a JPEG.
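The "projected out onto the scene" step can be sketched numerically: a minimal diffuse IBL loop, with a made-up four-pixel environment standing in for the HDR image.

```python
# Each pixel of a (here, four-pixel) environment image acts as a distant
# light arriving from its direction; a diffuse surface sums the
# contributions weighted by Lambert's cosine term.  The environment data
# and surface colors are invented for illustration.

# (unit direction the light comes from, RGB intensity)
ENVIRONMENT = [
    ((0.0, 1.0, 0.0), (2.0, 2.0, 2.5)),    # bright, bluish sky overhead
    ((1.0, 0.0, 0.0), (0.3, 0.25, 0.2)),   # dim side light
    ((-1.0, 0.0, 0.0), (0.3, 0.25, 0.2)),
    ((0.0, -1.0, 0.0), (0.1, 0.2, 0.1)),   # greenish ground bounce
]

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def diffuse_ibl(normal, albedo):
    """Lambertian shading from the environment pixels alone: no scene lights."""
    color = [0.0, 0.0, 0.0]
    for direction, intensity in ENVIRONMENT:
        cos_term = max(0.0, dot(normal, direction))
        for c in range(3):
            color[c] += albedo[c] * intensity[c] * cos_term
    return tuple(color)

up_facing = diffuse_ibl((0.0, 1.0, 0.0), (0.8, 0.8, 0.8))    # lit by the sky
down_facing = diffuse_ibl((0.0, -1.0, 0.0), (0.8, 0.8, 0.8)) # ground bounce only
```

An up-facing surface comes out bright and bluish, a down-facing one dim and greenish, which is exactly why swapping the HDRI changes the color cast of the whole scene in the example renders that follow.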


rigul64 ( ) posted Wed, 20 December 2006 at 10:07 AM

file_363105.jpg

OK, so I have a light blue vase and a green sphere on a white floor. So here I used an HDRI of a brightly lit grove.


rigul64 ( ) posted Wed, 20 December 2006 at 10:09 AM

file_363106.jpg

Now here's the same scene with an HDRI of a forest.


rigul64 ( ) posted Wed, 20 December 2006 at 10:10 AM

file_363107.jpg

Here it is on a glacier.


rigul64 ( ) posted Wed, 20 December 2006 at 10:12 AM

file_363108.jpg

Here it's in a cathedral.


rigul64 ( ) posted Wed, 20 December 2006 at 10:14 AM

file_363109.jpg

And here I used a JPEG instead of an HDRI. So you can see how the different images used affect the scene. Hope this helped.


Prikshatk ( ) posted Wed, 20 December 2006 at 11:08 AM

Hey, you beat me to it!

I was preparing the same series of images, except with a Naked Vicky and a Sword, obviously! :biggrin:

regards
pk
www.planit3d.com


rigul64 ( ) posted Wed, 20 December 2006 at 11:57 AM

Oh, I thought it was a New Vase and a Sphere, my bad!!
BTW, thanks for the links; all the HDRIs I used were downloaded from them.
Cheers!


kawecki ( ) posted Wed, 20 December 2006 at 12:52 PM

Quote - So you have your scene, it's enclosed in a sphere (skydome), and the image is mapped to the skydome. Again in simple terms, the information stored in the image's pixels is looked at and then projected out onto the scene, much like the rays of the sun, and bounced around like the sun's rays, thus illuminating your scene.

This only works for objects that are a perfect, ideal mirror. The illumination of an object is defined by its normal, the incidence angle of each light, the angle of the camera, and the BRDF of its surface. None of this data is present in an HDRI image.
The images that you obtain can also be produced by environment mapping, which works in exactly the same way; the only difference is that in the first case you put the image on the sky dome, and in the second case you put the texture on the surface of the object (mapped as a reflection of an imaginary sky-dome mirror).

You can save an image as a 16-bit PNG, but it will not be an HDRI image; it will only be a normal image with better resolution. HDRI images have a big difference between the illumination in the bright and dark parts. You can obtain HDRI from exterior scenes illuminated by the sun, but don't expect to get an HDRI image of a cloudy day or of any interior scene.

The work of Paul Debevec is very interesting, but if you look closer, it is directed toward the reconstruction of antique objects and architecture.

Stupidity also evolves!


rigul64 ( ) posted Wed, 20 December 2006 at 1:24 PM
  1. I never said that data is in the HDRI.
  2. I stated several times that my explanation was simplified (not everybody is a math head).
  3. It works for any object; you're confusing reflectivity with illumination.

Here is the same scene with non-reflective materials; clearly the HDRI IS ILLUMINATING THE SCENE.

  4. I've noticed that you post nothing but negative responses to anyone who has tried to explain HDRI, which leads me to believe that you are just trolling. So I will no longer respond to your posts.


rigul64 ( ) posted Wed, 20 December 2006 at 1:26 PM

file_363131.jpg

Here is the image with non-reflective materials.


ThrommArcadia ( ) posted Wed, 20 December 2006 at 2:02 PM

Thanks everyone, I've really learnt something today.  My head hurts, but I've learnt.


kawecki ( ) posted Wed, 20 December 2006 at 2:32 PM

A good example of an HDRI image:
Take some forest image, not a closed forest but one with a lot of trees and ground, where you can see the sky through the trees in smaller or larger areas.
There are many excellent pictures, but these pictures are useless as a background because the sky is white. This is unnatural! Forests don't look like this in real life.
What is happening? The sky is much brighter than the trees and ground, so the sky saturates the picture and the result is white. If you correct the shutter you can obtain a good image of the sky, but now the trees and ground will look very dark, losing all detail.
This situation has a much larger dynamic range than normal media can handle.
The first question is how you can make an HDRI image, in this case an image that has the correct illumination information for the sky, trees, and ground.
Here begins the problem: normal digital cameras cannot do it, because they use the usual 8 bits, which is not enough. So unless you have a special digital camera, you must go back and use classic photography. And now begins another series of troubles.
Film can handle more dynamic range, depending on the film quality, but once the pictures are taken you must send the film to the laboratory. Laboratories process film automatically: a sensor samples the illumination in some parts of the film and then adjusts the exposure, so the result is not always good; some images come out too dark, others too bright. The only way to get the correct image is to control the process yourself, or have the lab work under your supervision. Can you do that?
Even if you achieve a good image, all is destroyed when you scan it with a normal 8-bit-or-less scanner. You did the work for nothing!
There is an alternative: building an HDRI image from several images. In theory, if you take several photos of the same scene, each with a different shutter setting, you can multiply the intensity of each pixel in each image by the shutter's antilog value, then average those values to get something very near the correct intensity for that pixel.
That is the theory, but it is not easy to do. First you must have a base to fix your camera to, so the camera cannot move at all; if the camera moves, all is destroyed. Once you have the assembly, you take several images in a short interval of time with different shutter speeds, and the camera cannot move by a single pixel!
Well, you have done it; is it OK now? Not yet. You must send the film to the lab, and as I said before, the lab processes the film automatically, which will destroy the correct values of your photos. The lab must process all the images manually, with exactly the same exposure for every image.
Well, with luck, you now have an HDRI image of the forest. Is that the end? Not yet.
You created the HDRI image for some use, so in the end you will need to see the image, either directly or as the result of some 3D process. The moment you look at the result, all the effort was for nothing: you will see a forest with a white sky, or the correct sky with dark, detail-less trees and ground. Your monitor is not able to display this dynamic range and is again limited to 8 bits.
Suppose you are a lucky guy and now have a special monitor that allows a greater dynamic range. You think you can now see the real forest, but disillusion again!
Your eyes have an iris that acts as a shutter. When you are in the forest and you look at the sky, your iris closes, decreasing the light coming into your eyes, and you see the sky properly. When you focus on a tree, the tree is darker, so your iris opens, increasing the light entering your eye, and you see the tree fine.
With your HDRI image on your HDRI monitor the result is very different: as the monitor is very near, you cannot focus on and adapt the iris to each element. The result is that even if the illumination is correct, it will be too bright, hurt your eyes, and you will be unable to look at your creation.
But all this can work very well if you project the image onto a big screen far away from you; then the result will be excellent!
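The multiple-shutter averaging described above is essentially the Debevec/Malik merge; here is a greatly simplified sketch, assuming a linear camera response, with invented pixel values.

```python
# Each bracketed shot gives clipped 8-bit samples of the same pixel; divide
# each usable sample by its exposure time to recover relative radiance,
# weight samples away from the clipped extremes, and average.

def merge_exposures(samples):
    """samples: list of (pixel_value_0_255, exposure_time_seconds).
    Returns relative radiance (0.0 if every sample was clipped)."""
    num = 0.0
    den = 0.0
    for value, t in samples:
        weight = min(value, 255 - value)  # trust mid-tones, not extremes
        if weight <= 0:
            continue  # fully clipped sample carries no information
        num += weight * (value / 255.0) / t
        den += weight
    return num / den if den else 0.0

# Bright sky pixel: clipped in the long exposures, usable at 1/1000 s.
sky = merge_exposures([(255, 1 / 30), (255, 1 / 250), (200, 1 / 1000)])
# Dark ground pixel: only the long exposure captures it.
ground = merge_exposures([(90, 1 / 30), (12, 1 / 250), (0, 1 / 1000)])
# The merged radiances preserve the real sky/ground contrast (sky >> ground).
```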

Stupidity also evolves!


Angelouscuitry ( ) posted Wed, 20 December 2006 at 4:55 PM · edited Wed, 20 December 2006 at 4:55 PM

file_363153.jpg

I am glad we're in the mood for reiterating; I've wanted to pound some sense into a few of Poser's different lighting techniques for some time, and especially now that HDRI actually exists in Poser. I never stray from Infinite lights myself, mainly because Spot lights are the only other set I understand, and I never feel like recreating stage/theater lighting, so I never use them...

Needing a Sky Dome to (at least to this degree) create Vue/Bryce-worthy raytraced reflection in Poser, with this figure of mine, was an original theory of mine, but I didn't create the Sky Dome .PZ3 this attached image was rendered from, nor do I pretend to understand how its illumination works. Bushi explained to me that a skydome is just a plain sphere with inverted normals. I'd exhausted several avenues before he explained it to me, so it didn't take much for me to ask him for a copy of the .pz3 he'd used to post a pic in a Poser Sky Dome thread of mine. Thankfully, he obliged.

Bushi set the only light in the scene to be a Point Light at 0, 0, 0. The weird part I don't understand is why, when I put the camera behind my figure, he's illuminated just as well from the rear as from the front (without my having moved the light?). I'm still using P6, so there's no HDRI. IBL for that light is set off, and the Material Room setting for this light doesn't show anything special.

When I switch IBL for that light back on, I think I get a brighter render, and I'm looking forward to doing some test renders later tonight. I think the full intensity of the light may wash out the render a little, but I'm anxious to turn it down a bit to see if I get better contrast.

I also tried to run the P6 IBL Wacro on the light, but it failed?

I've definitely got HDRI, IBL, and Light Gels mixed up:

OK, so let's play Fact or Fiction. With Poser 7's new HDRI technology, I'll be able to light my scene with just a texture and no actual lights (Infinite or otherwise)? HDRI will work by merging light from every possible direction toward the 0, 0, 0 coordinates of the scene, like a Sky Dome? If so, then the converging illumination does spell Global Illumination, and if I wanted a much smoother, cleaner light, I could just feed HDRI a 50% grey or white texture (no shadow).

In the past, I've stuck my background picture into the diffuse color of a sole Infinite light at 0, 0, 0, through image map nodes. This did work as a gel, and added the shadows/noise I was expecting to find on my figure. What would be the difference between that and what IBL would have done for that light? Could IBL have illuminated from the rear also, maybe?

:rolleyes:

 


prixat ( ) posted Wed, 20 December 2006 at 5:08 PM

Want to make your own?
It's very simple!
http://www.drewrysoft.com/tutorials/hdri_in_DS.pdf

regards
prixat


Angelouscuitry ( ) posted Wed, 20 December 2006 at 5:21 PM

kawecki,

The example you've presented is a very well-known photographic situation you learn about in Photo 101, called "backlighting."

There is a very simple solution, using an automatic 35mm SLR camera:

1.)  Zoom in on your subject (in your case a tree, about half as far away as you can see) so only it fills the camera's view.

2.)  Read what f-stop and shutter speed the camera has chosen for that spot.

3.)  Zoom out to your crop.

4.)  When your camera tells you the f-stop and shutter speed have changed, switch it to Manual and set them to what they were at your zoom.

5.)  Fire away!

What would have been a washed-out sky is now just kind of bright. And what would have been a heavily shadowed subject is now as good as you're going to get it.
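The metering trick above can be put in numbers with the standard exposure-value formula EV = log2(N²/t) for aperture N and shutter time t; the f-numbers and shutter times below are illustrative only.

```python
import math

# Metering the whole backlit frame picks a sky-driven fast shutter;
# locking in the settings metered off the subject keeps the subject
# correctly exposed and lets the sky go merely bright.

def exposure_value(f_number, shutter_time):
    """EV = log2(N^2 / t): one more EV means half as much light reaches the film."""
    return math.log2(f_number ** 2 / shutter_time)

metered_on_subject = exposure_value(5.6, 1 / 60)   # zoomed in on the tree
metered_on_scene = exposure_value(5.6, 1 / 500)    # zoomed out: sky dominates

# Shooting at the subject's settings over-exposes the sky by this many stops:
stops_brighter = metered_on_scene - metered_on_subject   # about 3 stops
```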

  


kawecki ( ) posted Wed, 20 December 2006 at 5:35 PM

Quote - Here is the same scene with non-reflective materials; clearly the HDRI IS ILLUMINATING THE SCENE.

The properties of reflective and non-reflective surfaces are very different. Of course the HDRI image will illuminate something, but the question is: will the result be correct for non-reflective surfaces?

Quote - I've noticed that you post nothing but negative responses to anyone who has tried to explain HDRI, which leads me to believe that you are just trolling. So I will no longer respond to your posts.

???????  It means that you don't understand a bit!


The use of HDRI or non-HDRI images for the sky dome is equivalent to reflection mapping, and it must be. To understand this we must know how it all works.
For the sky dome you trace a ray going from your eye (the camera); the ray hits a point on the object's surface, is reflected, and continues until it intersects the sky-dome surface. Now the direction of the ray is reversed, and the colour of the pixel at the intersection point with the sky dome is used to illuminate the point on the object's surface where the ray was initially reflected.
With reflection mapping the process is almost the same. You trace a ray going from your eye until it hits a point on the object's surface; then the ray is reflected and you follow it until it intersects an imaginary sky dome. This time the sky dome doesn't exist; it is only a mathematical construction, but the ray intersects it anyway. The difference is that instead of taking the colour value of the intersected pixel, you take the UV coordinates of that pixel. Then you assign this UV value to the point on the object's surface where the ray was reflected.
This process is done before the rendering; for rendering you use the same sky-dome map, this time applied to the object mapped with the UV coordinates calculated in the previous step.
You see, the process is the same and the result must be the same.
The advantage of reflection mapping is that it is much faster: you don't need ray tracing, and it can even be done in real time.
Ray tracing with the real sky dome gives more accurate results when many objects exist in the scene, because the ray can be intercepted by another object, and the image reflected in the first object will then be this second object and not the sky dome. With reflection mapping the object will always have the sky dome projected on it, even where it is obscured by another object, which is wrong!
For an object alone, or for objects that don't obscure one another, the sky dome and reflection mapping will give the same result. You can multiply the texture by the object's surface properties in each case, but the result will not be correct, because both methods work for ideal mirrors that follow Fresnel's law.
If you use an HDRI sky dome to illuminate a non-reflective object, it will illuminate it, but the result will be faulty.
The reason is that, even though only one ray comes from the camera and hits a point on the object's surface, this point is not illuminated only by the reflected ray; it is illuminated by light coming from all points of the sky dome. To get the correct value of the illumination at the point, you must integrate over all the sky-dome pixels, taking into account the angle of each ray and the BRDF function. The whole visible sky dome is illuminating the point, not just one pixel!
The result is fake, and you could use other fake methods that are much simpler and faster to create visual effects.
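The hemisphere integral described here can be sketched with Monte Carlo sampling; the analytic "sky" function below is invented for illustration, and constant solid-angle factors are omitted.

```python
import math
import random

# The diffuse illumination at a point is an integral of the sky dome over
# the hemisphere above it, cosine-weighted; the single reflected camera
# ray is not enough.  Estimated here over a toy analytic sky that is
# bright at the zenith and dim at the horizon.

def sky_radiance(direction):
    """Invented sky: brightest straight up, dimmer toward the horizon."""
    _, y, _ = direction
    return 0.2 + 1.8 * max(0.0, y)

def irradiance(normal, samples=20000, seed=0):
    """Cosine-weighted average of sky radiance over the hemisphere
    above `normal` (constant solid-angle factors omitted)."""
    rng = random.Random(seed)
    total = 0.0
    count = 0
    while count < samples:
        # uniform direction on the unit sphere via rejection sampling
        d = (rng.uniform(-1, 1), rng.uniform(-1, 1), rng.uniform(-1, 1))
        norm = math.sqrt(sum(c * c for c in d))
        if not 1e-6 < norm <= 1.0:
            continue
        d = tuple(c / norm for c in d)
        cos_term = sum(n * c for n, c in zip(normal, d))
        if cos_term <= 0.0:
            continue  # direction is below the surface's horizon
        total += sky_radiance(d) * cos_term
        count += 1
    return total / samples

up = irradiance((0.0, 1.0, 0.0))    # faces the bright zenith
side = irradiance((1.0, 0.0, 0.0))  # sees half sky, half dim horizon
```

An up-facing point collects far more light than a sideways-facing one, even though a single mirror-reflection lookup would sample only one sky pixel for each.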

Stupidity also evolves!


kawecki ( ) posted Wed, 20 December 2006 at 5:45 PM

Quote - The example you've presented is a very well known Photographic term, you learn in Photo 101, called "Backlighting."

There is a very simple solution, using an automatic 35mm SLR camera: 

Films have a much bigger dynamic range; after all, the HDRI images have to be made somehow.
In my old camera days, I didn't use automatic cameras; I adjusted them by hand and eye.
As I never had a lab at home, many good images were ruined by the lab.

Stupidity also evolves!


Angelouscuitry ( ) posted Wed, 20 December 2006 at 6:10 PM · edited Wed, 20 December 2006 at 6:14 PM

*"Films have much bigger dynamic range, "

*kawecki - Unless you're talking about Medium or Large Format negatives (about 3" x 4" and 8" x 10"), "35mm" refers to the size/type of film/negative used by most Single Lens Reflex cameras. I wasn't talking about digital... although you can bet your bottom dollar the same principle/technique applies!

*"after all in some way the HDRI images have to be done."

Getting a 360-degree 2D photographic image in one shot is impossible, but it doesn't take any special film or camera. Fisheye and panoramic lenses will help, but what's needed is a program to composite the separate shots and then squish the top and bottom of the new image so it maps onto a sphere nicely.

* "I didn't used automatic cameras and adjusted them by hand and eye."

What tool you use to get your f-stop and shutter speed is irrelevant. You could just as well walk up to your subject with a light meter, measure the distance between it and your camera, and then calculate the shutter speed; but if you have an automatic camera doing this for you, it will display what's going on, and likewise let you make changes. Automatic cameras aren't always correct; this is one instance where they are almost always off.

😉


Angelouscuitry ( ) posted Wed, 20 December 2006 at 6:31 PM

"For the sky dome you trace a ray going from your eye (the camera), the ray hits a point in the object's surface, is reflected and continue until it intersects the sky dome surface. Now the direction of the ray is reversed and the colour of the pixel in the intersection point with the sky-dome is used to illuminate the point of the object's surface where was initially reflected."

I don't think that's geometrically sound.  I know I've heard of a ray being something different from a light ray, in reference to the camera, but cameras don't emit light. You need to angle your light object so that it bounces/reflects into your camera, or your scene renders null/black. This is sometimes confusing, but I generally think in terms of light rays, because they are a natural law of science that a render engine couldn't do without...


carodan ( ) posted Wed, 20 December 2006 at 8:36 PM

What I can't quite work out is how best to work with HDRI in P7. So far all my experiments have yielded very unpredictable results, requiring a lot of fiddling with settings to achieve any half-reasonable renders.
I kind of thought they might be more straightforward: attach to the colour channel, set to 1 and go. But so far they're far from this.
They always render far too exposed an image, or totally over-saturate colour, etc.
I must be missing something.

Any tips would be much appreciated.
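For what it's worth, over-exposure like this is typical whenever raw HDR radiance values (which routinely exceed 1.0) are clipped straight to an 8-bit image. I don't know Poser's internals, but the usual remedy elsewhere is an exposure/tone-mapping step before quantizing. A rough sketch of the idea (Reinhard-style roll-off; function names are mine, not Poser's):

```python
def tone_map(hdr_value, exposure=1.0):
    """Compress an unbounded HDR intensity into [0, 1) so it survives
    conversion to a display value. Without this, anything above 1.0
    simply clips to white, which reads as an overexposed render."""
    v = hdr_value * exposure          # exposure is the knob to fiddle with
    return v / (1.0 + v)              # smooth roll-off instead of hard clip

def to_8bit(mapped, gamma=2.2):
    """Gamma-encode and quantize to the 0-255 range of a normal image."""
    return round(255 * (mapped ** (1.0 / gamma)))
```

Note that even a huge radiance like 100.0 lands just below white instead of blowing out a whole region.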

 

PoserPro2014(Sr4), Win7 x64, display units set to inches.

                                      www.danielroseartnew.weebly.com



kawecki ( ) posted Thu, 21 December 2006 at 12:00 AM

Quote - "Films have much bigger dynamic range,"

kawecki - Unless you're talking about Medium or Large Format negatives (about 3" x 4" and 8" x 10"), "35mm" refers to the size/type of film/negative used by most Single Lens Reflex cameras.  I wasn't talking about digital... although you can bet your bottom dollar the same principle/technique applies!

It's not the size of the film, it's the quality of the film and how sensitive it is. A normal 35mm film is able to store a wider range of intensities than a normal digital camera. A 1000 ASA film handles dark areas much better than a 400 ASA film, but it is also more easily overexposed.
I am not a professional photographer; a professional photographer has many different cameras and many different films to use for each situation.

Quote - "after all in some way the HDRI images have to be done."

Getting a 360-degree 2D photographic image in one shot is impossible, but it doesn't take any special film or camera.  Fish-eye and panoramic lenses will help, but what's needed is a program to composite the separate shots and then squish the top and bottom of the new image so it maps onto a sphere nicely.

Panoramic images are one thing and HDRI images are another. A panoramic image may or may not be an HDRI image. Most panoramic images are normal images and most HDRI images are not panoramic, but any combination can exist.

Quote - "I didn't use automatic cameras and adjusted them by hand and eye."

What tool you use to get your f-stop and shutter speed is irrelevant.  You could just as well walk up to your subject with a light meter, measure the distance between it and your camera, and then calculate the shutter speed; but if you have an automatic camera doing this for you, it will display what's going on and let you make changes.  Automatic cameras aren't always correct.  This is one instance where they are almost always off.

Automatic cameras are very easy to use and in most cases are fine, but it is an automatic process where the camera decides what to do, and in some cases that is not what you want.
If you are not able to turn off the automatic process, then you cannot use that camera for some scenes.
With a little experience you don't need a photometer or anything; you know very well how to set the distance, speed and shutter for what you want to photograph.

Quote - I don't think that's geometrically sound.  I know I've heard of a ray being something different from a light ray, in reference to the camera, but cameras don't emit light. You need to angle your light object so that it bounces/reflects into your camera, or your scene renders null/black. This is sometimes confusing, but I generally think in terms of light rays, because they are a natural law of science that a render engine couldn't do without...

The path of light is symmetrical with respect to direction. You can trace the ray paths from the lights to the camera, or reverse the process and trace the rays from the camera to the lights. The result is the same.
In many cases the reversed path is much easier to compute. If you trace rays from the light to the camera, you will trace many rays that never reach the camera, so the time spent calculating them is wasted. On the other hand, if you trace rays from the camera, you can be sure your effort is not wasted, because every such ray reaches the camera.
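The camera-to-dome tracing described earlier in the thread can be sketched in a few lines. This is a toy illustration of the idea only, not any real renderer's code; the sky dome here is just any callable from a direction to a colour:

```python
def reflect(d, n):
    """Mirror direction d about surface normal n (both unit vectors)."""
    dot = sum(a * b for a, b in zip(d, n))
    return tuple(a - 2.0 * dot * b for a, b in zip(d, n))

def shade_from_dome(view_dir, normal, sky_dome):
    """Backward tracing in one bounce: reverse the eye ray at the
    surface and look up the sky-dome colour in the reflected direction.
    Every ray traced this way contributes to a pixel, which is why
    starting from the camera wastes no work."""
    r = reflect(view_dir, normal)
    return sky_dome(r)
```

For example, an eye ray pointing straight down at a floor bounces straight up and picks up whatever the dome shows overhead.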

Stupidity also evolves!


operaguy ( ) posted Thu, 21 December 2006 at 12:24 AM

rigul64, were your examples above rendered in Poser7?

kawecki or Angelouscuitry or anyone...

Has anyone actually been able to produce a great-looking result using the P7 HDRI feature?

Some people are saying it will not be very impressive because in Poser, the specular is not addressed in its implementation of IBL (if I've got that terminology right).

Anyone with images?

::::: Opera :::::


ThrommArcadia ( ) posted Thu, 21 December 2006 at 1:12 AM

I tried rendering something with one of the default HDRI setups.  It was a little test render of the Robkitty that comes with P7.

Nothing else in the scene, just the robot.

After 4 hours it had only rendered a fraction of the cat's head, which was way too dark (I had the thing set to all-reflective, with a purple diffuse).

Anyway, I lost patience and cancelled the render.

(AMD2600, 1.25GB RAM, etc...)


kawecki ( ) posted Thu, 21 December 2006 at 2:36 AM

Quote - kawecki or Angelouscuitry or anyone...

Has anyone actually been able to produce a great-looking result using the P7 HDRI feature?

Don't expect it from me for some time, I'm still on Poser 4 and 5!!!
Maybe I can do something, but by other means.

Stupidity also evolves!


Angelouscuitry ( ) posted Thu, 21 December 2006 at 4:31 AM

"It's not the size of the film, it's the quality..."

I have taken years of photography classes in college, and that's just not true, for two reasons:

1.)  With a bigger negative (Large Format over Medium Format over 35mm, etc.) you actually record more information.  Likewise you can expose much larger prints, or get bigger digital scans.  Having a Large Format camera vs. a 35mm would be comparable to having a digital camera with two flash cards, where one card is 1GB and the other is 64MB.

2.)  The ASA rating of film isn't a measure of quality.  100-speed film isn't much, if at all, more expensive than 1000.  ASA is a measure of speed, but you don't always want or need it.  Indoor pictures, stuff at night, still shots, etc. require slow (100) film, while images with the sun, backlit images, or images from a race track, etc. are when you would use Fuji 1000.

"Automatic cameras are very easy to use and in most cases are ok, but it is an automatic process where the camera decides what to do and in some cases is not what you want to do."

That's what I said! ... ;)

" If you are not able to turn off the automatic process then you cannot use this camera for some scenes."

Automatic cameras aren't cheap.  I generally don't think it's possible to have an automatic camera that won't also function in manual mode, unless a user is just too lazy to read the manual and figure out how to make the switch.

"With some little experience you don't need any photometer or anything, you know very well how to set the distance, speed and shutter for what you want to photograph."

There is never an excuse for not having your light meter with you.  You'll always see a good photographer/director with a little golf-ball-looking doodad hanging around his neck.  It's just a tool of the trade.  Not having one would be like playing baseball without a mitt.  Sure, you could do it, but eventually you'll pay.

"The path of light is symmetrical for the direction...because this ray always reach the camera."

That's what I first read about this, on a MetaCreations page somewhere, back in the days of Poser 3... I still don't think the vector path of the light, backwards, is all that key, though.  Sure, you'll trace light rays that lose their charge, but then you've found a coordinate to map, so what if it's dark?

"rigul64, were your examples above rendered in Poser7?"

I think he said that was Maya.

" Has anyone actually been able to produce a great-looking result using the P7 HDRI feature?"

I am also hoping someone can show us some examples, and screenshots too!

For now I'm still waiting for my copy of P7 to be shipped.


rigul64 ( ) posted Thu, 21 December 2006 at 9:49 AM

It was Maya, using Mental Ray.


carodan ( ) posted Thu, 21 December 2006 at 10:22 AM

IBLs can't 'see' anything specular in P6 or P7, so I'd have to conclude that even using .hdr files with them is going to be limited. IBLs seem to 'blur' any image maps plugged into them to the extreme. Not sure whether they interact with .hdr files in the same way.
Better than nothing, I guess, but until specularity is dealt with (most likely increasing render calculations) I can't see Poser's HDRI implementation as much more than a gimmick.

 

PoserPro2014(Sr4), Win7 x64, display units set to inches.

                                      www.danielroseartnew.weebly.com



kawecki ( ) posted Thu, 21 December 2006 at 9:59 PM

Quote - "The path of light is symmetrical for the direction...because this ray always reach the camera."

That's what I first read about this, on a MetaCreations page somewhere, back in the days of Poser 3... I still don't think the vector path of the light, backwards, is all that key, though.  Sure, you'll trace light rays that lose their charge, but then you've found a coordinate to map, so what if it's dark?

Light rays do not exist; they are only a geometrical construction. In the end you have a path starting at the light source and ending at the camera, or starting at the camera and ending at the light source; the direction of flow doesn't matter. You can even exchange the camera with the light source and the result is the same. You just trace the path, calculate how each object in the ray's path affects the light, and the resulting intensity and colour is what you display for every pixel of your render.
The rays are nothing more than mathematical abstractions that produce a real result.
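A toy way to see the direction symmetry: the attenuation along a path is the product of the reflectances it meets, and a product doesn't care which end you start from. (Purely an illustration; no real renderer reduces to this.)

```python
def path_throughput(reflectances):
    """Total attenuation along a light path: the product of the surface
    reflectances the ray bounces off. Multiplication is commutative, so
    walking the path light->camera or camera->light gives the same
    answer; only the path itself matters, not the direction of travel."""
    total = 1.0
    for r in reflectances:
        total *= r
    return total
```

So tracing from the camera, as described above, is purely a matter of efficiency, not of correctness.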

Stupidity also evolves!

