
Subject: Linearized camera model - ground plane


kawecki ( ) posted Tue, 20 July 2010 at 6:05 PM · edited Sat, 18 January 2025 at 5:14 PM

BACKGROUND
The camera model that Poser uses is wrong; well, not only Poser, other 3D engines too.
Poser's camera model is the same as a photographic camera's, but human eyes are not a photo camera. What our eyes see is not the same as what is captured in a photograph.
Our eyes are able to see near and far objects at the same time; with a camera we have to adjust the focal distance and lenses for near or for far-away objects, but we can't have both at the same time.
The consequence is that we get blurred and distorted images near or far, and the distances shown in the picture are not the real distances.
Translating the problem to Poser, the subject of this thread, the most annoying problem is the ground plane. The ground-plane texture very near the camera looks very bad, horribly stretched and distorted; and not only very near, far away it also looks bad. Only in the middle range is it OK.
We can improve the rendering by using very large, high-resolution textures for the ground plane. Since there are many more pixels in the texture, the distortion of the near pixels is not so horrible.
Another way is to tile the ground plane, to avoid the use of huge textures (a small sketch of tiling follows below).
Either way we can more or less fix the near ground, but the far-away ground-plane texturing continues to look bad, and nothing can be done about it.
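A quick illustration of what tiling means in practice: the unit UV square is simply repeated, so a small texture covers the whole plane at full resolution. This is a toy Python sketch with invented numbers; in Poser itself the image-map node's U/V scale settings do the same job.

```python
# Toy sketch: tiling repeats the unit UV square across the plane.
def tile_uv(u, v, repeats):
    """Map plane UVs to a texture tiled 'repeats' times in each direction."""
    return (u * repeats) % 1.0, (v * repeats) % 1.0

print(tile_uv(0.37, 0.81, 8))  # approximately (0.96, 0.48): same texture, reused
```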

The research
Looking at the rendered texture near the camera, I observed that the pixels are stretched vertically: the nearer to the camera, the more stretched, while in the far-away ground the pixels are heavily compressed. If we divide the texture image into square elements, the shape of each element is not preserved. Near the camera each element looks like a vertical dash, far away it looks like a line, and in the middle it is more or less a square. Also, the size of each element decreases in a non-linear way as it gets farther from the camera.
By comparison, looking all around with my own eyes, I found that the shape is more or less preserved and the size of each element decreases in a linear way as the distance from my eye increases.

Mathematics
The camera model of Poser, or of a photo camera, is
x1 = A·x·f(z)    y1 = B·y·f(z)    where x, y, z are the object coordinates, x1, y1 are the coordinates on the projection plane, and
f(z) = d/(d + z)    where d is the distance from the camera to the projection plane.
From this we find that the size of an element decreases with the square of the distance (non-linear): df(z)/dz = -d/(d + z)^2, which behaves like 1/z^2 for large z.
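A quick numeric check of that claim (a sketch only; d = 1 is an arbitrary choice, and the element size is approximated as f(z) - f(z + 1)):

```python
# Sketch: projected size of a unit ground element under the pinhole model.
d = 1.0  # camera-to-projection-plane distance, arbitrary for illustration

def f(z):
    return d / (d + z)

for z in [1, 2, 4, 8, 16, 32]:
    size = f(z) - f(z + 1)  # projected height of a unit element at distance z
    print(f"z={z:3d}  size={size:.5f}  size*z^2={size * z * z:.3f}")
# size*z^2 approaches a constant, confirming the ~1/z^2 (quadratic) falloff.
```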

Human eye camera model
Premise: as I observed, the size of an element must decrease with distance in a linear way:
df(z)/dz ∝ 1/z
One function whose derivative behaves this way is f(z) = ln(z), but this is not correct, because the function must satisfy f(infinity) = 0 and the logarithm does not.
Another alternative is f(z) = 1/z^0.1, which behaves similarly to the logarithm and does give zero at z = infinity.
Even so, within some range of z both functions can be used (compare them numerically below).
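Here is one way to compare them (a sketch; element size is again approximated by the difference of f at consecutive distances):

```python
import math

# Sketch: element sizes under the two candidate "human eye" profiles.
for z in [1, 2, 4, 8, 16, 32]:
    s_log = math.log(z + 1) - math.log(z)  # f(z) = ln(z): shrinks like 1/z
    s_pow = z**-0.1 - (z + 1)**-0.1        # f(z) = 1/z^0.1: shrinks like 1/z^1.1
    print(f"z={z:3d}  ln={s_log:.5f}  pow={s_pow:.5f}  ratio={s_log / s_pow:.1f}")
# The ratio drifts only slowly, so over a limited z range the two profiles
# behave alike; but only z**-0.1 tends to zero as z goes to infinity.
```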
Research in progress

What to do
As I cannot change Poser's camera, and have no way to do so without the source code, the only way is to change the ground plane.
I can texture the ground plane in such a way that the texturing of the plane, multiplied by Poser's camera model, gives the human-eye camera model.
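Taken literally, that means the per-distance correction is the ratio of the two models, stretch(z) = f_eye(z) / f_poser(z). A minimal sketch of that compensation, reusing the assumed d = 1 and the z^-0.1 profile from above (my illustrative constants, not kawecki's actual material setup):

```python
# Sketch: stretch factor so Poser's projection times the pre-stretched
# texture reproduces the desired "eye" profile. Constants are assumed.
d = 1.0

def f_poser(z):
    return d / (d + z)

def f_eye(z):
    return z ** -0.1

def stretch(z):
    """Vertical pre-stretch for a ground-texture tile at distance z."""
    return f_eye(z) / f_poser(z)

for z in [1, 5, 10, 50]:
    print(f"z={z:3d}  stretch={stretch(z):.2f}")
# A tile 50 units out needs roughly a 34x vertical pre-stretch.
```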

Stupidity also evolves!


kawecki ( ) posted Tue, 20 July 2010 at 6:11 PM

[attached image: file_456359.jpg]

**Preliminary result.** Take a look at the image: the ground looks very natural. The pixels near the camera are not too distorted and are acceptable, and the pixels near the horizon look very good. The ground plane is tiled to give good resolution. I have only corrected in the vertical direction; some correction is still needed in the horizontal direction, especially far away. Too many things to be done yet.

Stupidity also evolves!


bagginsbill ( ) posted Tue, 20 July 2010 at 6:12 PM · edited Tue, 20 July 2010 at 6:22 PM

The sensor in the human eye is a hemisphere, not a plane. That's why you don't see the same distortion as a camera produces.

This is easily dealt with in cameras using a fish-eye lens, which can be simulated in Poser by using refraction on a shaped object placed in front of the camera. However, you will find that straight lines become curved in that case. They actually are curved on the retina, but the brain converts that information into the perception that the lines are really straight.

You won't get the same effect unless you print the render on the inside of a huge sphere and you stand in it.
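For reference, the two projections can each be stated in one line: a rectilinear lens maps a ray at field angle theta to image radius r = f*tan(theta), while an equidistant fisheye maps it to r = f*theta. A small numeric sketch (f here is an arbitrary constant, not a Poser setting):

```python
import math

# Sketch: rectilinear vs. equidistant-fisheye mapping of ray angle to radius.
f = 1.0  # focal constant, arbitrary for illustration

for deg in [10, 30, 50, 70, 85]:
    theta = math.radians(deg)
    r_rect = f * math.tan(theta)  # rectilinear: explodes toward 90 degrees
    r_fish = f * theta            # fisheye: linear in angle, bends straight lines
    print(f"theta={deg:2d}  rectilinear={r_rect:7.3f}  fisheye={r_fish:.3f}")
```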


Renderosity forum reply notifications are wonky. If I read a follow-up in a thread, but I don't myself reply, then notifications no longer happen AT ALL on that thread. So if I seem to be ignoring a question, that's why. (Updated September 23, 2019)


kawecki ( ) posted Tue, 20 July 2010 at 6:29 PM

I really don't know how the eyes work; too many mysteries remain to be discovered.
But I can play with mathematical functions and find one that matches what the eyes see. The function doesn't even need to be physically correct (the Phong model is one example); if it is simple and works, then it is OK.
Another mystery: why don't gray or brown lights exist, and how do we know what is gray and what is white?

Stupidity also evolves!


LaurieA ( ) posted Tue, 20 July 2010 at 6:39 PM

And to think I'm still searching for the meaning of life...

:oP

Laurie



ockham ( ) posted Tue, 20 July 2010 at 7:14 PM

As BB said, the brain function is the key, and you won't be able to come anywhere
near duplicating that in a camera equivalent.

Bear in mind that the retina is hugely imperfect.  Part of it (the blind spot, where
the optic nerve exits) is actually missing, and there are many other distortions and
gaps in an adult eye, worse as we get older.  With a scene made of familiar elements,
the brain fills in those missing points and reconstructs what the external patterns
"should" look like.  (Luckily, as we get older the brain also knows more about familiar
elements!)  You can lose a tremendous number of original "pixels", or have a tremendous
amount of distortion in the lens and retina, before the perceived result starts to
deteriorate.  It's magic and miraculous.

My python page
My ShareCG freebies


Miss Nancy ( ) posted Tue, 20 July 2010 at 7:14 PM · edited Tue, 20 July 2010 at 7:21 PM

in poser 4 and later, ISTR the distance from the front of the camera to the focal plane
was 50 mm, but I can't recall how it was deduced.  human retinal cells (rods or cones)
respond to luminance and wavelengths in some manner mediated by some part of the brain.
grey is white of low luminance and brown is dark red/orange.  "meaning of life" is to produce, thru
the process of evolution, a sapient life-form that can create the higgs boson, at which point it
all starts over again. also known as "big bang theory".

p.s. just to agree with ock's and WW's statements following this, interpretation of colour by the human
brain is highly subjective and subject to trickery.  zappa was a highly innovative songwriter of
his day (OT).



ockham ( ) posted Tue, 20 July 2010 at 7:16 PM

As for gray, white and brown, we don't have any firm standards for those.  It depends
entirely on whether the nearby objects are darker or lighter.

http://en.wikipedia.org/wiki/Simultaneous_contrast

My python page
My ShareCG freebies


WandW ( ) posted Tue, 20 July 2010 at 7:17 PM

When I was a kid I had a black light... 😉

Pure gray is a function of luminance, not of colour.  In the colour gamut visible to the human eye, chroma decreases with luminance because the colour-sensing cone cells are not as sensitive as the luminance-sensing rod cells, so colour goes away as the light level decreases.

I suppose you could have a brown light, though.  Try mixing red with some green.  Makes me think of an old Zappa song: "Ronnie saves his noomies on the window in his room--a marvel to be seen--dysentery green..." :ohmy:

----------------------------------------------------------------------------------------

The Wisdom of bagginsbill:

"Oh - the manual says that? I have never read the manual - this must be why."
“I could buy better software, but then I'd have to be an artist and what's the point of that?"
"The [R'osity Forum Search] 'Default' label should actually say 'Don't Find What I'm Looking For'".
bagginsbill's Free Stuff... https://web.archive.org/web/20201010171535/https://sites.google.com/site/bagginsbill/Home


kawecki ( ) posted Tue, 20 July 2010 at 9:47 PM · edited Tue, 20 July 2010 at 9:54 PM

Not so easy with gray colors.
A white color can be RGB = 255, 255, 255.
A gray can be RGB = 128, 128, 128.
And black is RGB = 0, 0, 0.
So the colors from black to white, passing through the grays, all have R = G = B, and the only difference between them is the intensity; by that logic a gray color is just a white with lower intensity.
Now let us do a simple experiment.
Take two sheets of paper, one white and one gray, and put them on your desk. You can see that one is white and the other gray; obvious. The gray paper reflects less light than the white one, so it looks gray.
Now take another white sheet of paper and put it under your desk, where it is dark, with much less illumination than on top. Look at the sheet: it is still white. It didn't turn gray, even though it now reflects much less light than the gray sheet on top of the desk. Make any experiment you like and you will find that the white paper always remains white and the gray always remains gray, unless it is very dark and then everything turns black.
Illuminate the gray paper with a strong light: it will still remain gray, no matter how intense the light is.
We are always able to tell what is white and what is gray, no matter the illumination.

In physics there is a big difference between a white and a gray color, despite the equal RGBs.
A white color is one that reflects all the light it receives, a black color is one that absorbs all illumination, and a gray color is something in the middle: it reflects some and absorbs some.
We can perfectly define a color by knowing the absorption coefficients of its RGB components; intensity doesn't matter.

How the hell is the eye able to measure the absorption coefficient?
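One long-standing hypothesis (Land's Retinex theory, roughly) is that the visual system doesn't measure absolute light at all: it compares each surface with its neighbours, and ratios cancel the illumination term. A toy sketch of that idea, with all numbers invented:

```python
# Toy sketch of ratio-based lightness constancy (Retinex-flavoured).
# observed = albedo * illumination, so ratios to the local brightest
# patch cancel the illumination and recover relative albedo.

def relative_albedo(observed):
    brightest = max(observed)
    return [round(v / brightest, 3) for v in observed]

albedos = [0.9, 0.5, 0.1]                  # white, gray, near-black paper

on_desk    = [a * 1000.0 for a in albedos]  # strong light
under_desk = [a * 10.0 for a in albedos]    # weak light

print(relative_albedo(on_desk))     # [1.0, 0.556, 0.111]
print(relative_albedo(under_desk))  # identical: illumination cancels out
```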

Stupidity also evolves!


bagginsbill ( ) posted Tue, 20 July 2010 at 10:10 PM

[attached image: file_456366.jpg]

What you describe is demonstrated in one of my favorite illusions.

http://en.wikipedia.org/wiki/Same_color_illusion

In that illusion, the squares A and B have the same luminance, but we perceive otherwise.

I have made an even more extreme version of it. In mine, A appears to be brighter than B, yet A is RGB 90 and B is RGB 168. The A square is very much darker than B, despite what our eyes tell us.

Nearby I have placed a sphere. It appears to be brighter than B, but in fact the sphere is darker than B over almost all of its surface.


Renderosity forum reply notifications are wonky. If I read a follow-up in a thread, but I don't myself reply, then notifications no longer happen AT ALL on that thread. So if I seem to be ignoring a question, that's why. (Updated September 23, 2019)


kawecki ( ) posted Wed, 21 July 2010 at 1:21 AM · edited Wed, 21 July 2010 at 1:22 AM

Quote - This is easily dealt with in cameras using a fish-eye lens, which can be simulated in Poser by using refraction on a shaped object placed in front of the camera. However, you will find that straight lines become curved in that case. They actually are curved on the retina, but the brain converts that information into the perception that the lines are really straight.

There is no distortion in an image projected onto the inside of a sphere, so there is nothing to correct. The problem with a fish-eye lens is that the final destination is a plane (the picture), and there is no way to map an image from a sphere onto a plane without distortion.
Using a sphere as the projection surface has another advantage: it preserves the illumination of all points of the image, because the light arrives at each point along the normal direction. This doesn't happen with a plane, where the light ray becomes more oblique as we move away from the center, lowering the illuminance (Lambert's cosine law).
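The planar falloff kawecki describes is usually quoted as the cosine-fourth law (one cosine factor from Lambert obliquity, the rest from geometry), while a spherical sensor centred on the pinhole sees a constant factor. A quick illustrative sketch:

```python
import math

# Sketch: relative illuminance across the field of view for a flat sensor
# behind a pinhole (cos^4 falloff) vs. a sphere centred on the pinhole.
for deg in [0, 15, 30, 45, 60]:
    theta = math.radians(deg)
    flat = math.cos(theta) ** 4  # cosine-fourth law
    sphere = 1.0                 # every point faces the pinhole head-on
    print(f"theta={deg:2d}  flat={flat:.3f}  sphere={sphere:.1f}")
```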

Stupidity also evolves!


TrekkieGrrrl ( ) posted Wed, 21 July 2010 at 1:38 AM

When you say the eye can focus on both near and far at the same time, you're mistaken. The eye is just quick at adapting, and less so as you age. I assure you that while I type this, my keyboard is in focus but the rest of the train compartment where I am is very much blurred. Since I'm slightly myopic, the effect is worse than with a "normal" eye, but still visible. You just don't think about it, because whatever you LOOK at is in focus.

As your eyes get older and the lens becomes less flexible, it takes longer to adjust, and you also get more ...er, whatever the opposite of myopic is in English... But still, in general, what you look at is what you focus on. Funny thing is, you can consciously blur your vision by "staring into space", but the opposite, making everything sharp, isn't possible.

That said, I agree that the lack of depth is what makes most renders look fake. And because it's something your eyes usually do automatically, it takes a conscious effort to even realise it. It's one of those niggling things where you know there's something wrong but you can't put your finger on it.
 

FREEBIES! | My Gallery | My Store | My FB | Tumblr |
You just can't put the words "Poserites" and "happy" in the same sentence - didn't you know that? LaurieA
  Using Poser since 2002. Currently at Version 11.1 - Win 10.



kawecki ( ) posted Wed, 21 July 2010 at 3:05 AM

You set the focus to see the details. If you set the focus to read something near your eyes, you still continue to see everything, but with less detail outside the focus. The image out of focus is not blurred as in photos; it is foggy.

My eyes are a mess: I am myopic and anti-myopic (what is the name????) at the same time.
I read at the distance a person with normal vision reads; something a little farther away is in fog (myopic), and I cannot read small letters or anything very near (anti-myopic). My right eye is unable to see details, so I read with only one eye. The only thing I don't have is astigmatism!

Stupidity also evolves!


EnglishBob ( ) posted Wed, 21 July 2010 at 8:27 AM

Quote - ...anti-myopic (which is the name????)

Long sight is called hypermetropia. I'm not an optician's son for nothing. :-)

The other word you may need later on is presbyopia, or "old sight" - where your eyes lose their power to focus at varying distances. Then it's bifocal time!

It's very rare for a human eye not to have some degree of astigmatism, by the way. It may be at a low enough level to not be worth correcting, though.


LaurieA ( ) posted Wed, 21 July 2010 at 12:47 PM

I'm presbyopic!

Makes me sound religious somehow ;o).

Laurie



TrekkieGrrrl ( ) posted Wed, 21 July 2010 at 3:40 PM

I'm slowly but steadily turning into one o' them presbyopics, too! Presbyopics of the world, UNITE!

Sometimes I see something best with my glasses on, sometimes I have to peer over the rim of them.. and sometimes neither will bring it into focus >_< Bifocal time.. yea.. but I don't wanna admit it. That stuff is for OLD people!

FREEBIES! | My Gallery | My Store | My FB | Tumblr |
You just can't put the words "Poserites" and "happy" in the same sentence - didn't you know that? LaurieA
  Using Poser since 2002. Currently at Version 11.1 - Win 10.



LaurieA ( ) posted Wed, 21 July 2010 at 5:46 PM

I needs the bifocals, I admit it. But they're gonna be the no-line ones so that no one has a clue...lmao. I wonder how long it will be before I stop tripping over curbs and tapping ppl's rear bumpers? I'm sure they don't take long to get used to ;o).

Laurie



SamTherapy ( ) posted Wed, 21 July 2010 at 6:45 PM

Quote - I'm presbyopic!

Makes me sound religious somehow ;o).

Laurie

Me too.  And all these years I thought I was heterosexual. :)

Coppula eam se non posit acceptera jocularum.

My Store

My Gallery

