
Renderosity Forums / Poser - OFFICIAL



Welcome to the Poser - OFFICIAL Forum

Forum Coordinators: RedPhantom

Poser - OFFICIAL F.A.Q (Last Updated: 2024 Nov 29 7:57 am)



Subject: PhotoRealism....why don't we have it yet?


Photopium ( ) posted Sun, 11 February 2007 at 10:33 PM · edited Fri, 29 November 2024 at 12:48 PM

I was just sitting here thinking....shouldn't we have this by now?

So I'm curious what everyone's thoughts are on this topic.  What is preventing us from having easy-access photo-realism at this point?
What are some possible solutions to the roadblocks?
When do you think we'll break through the glass ceiling?

I'll offer up some of my thoughts.  Models have enough polys.  Texture maps are high enough res without killing processor/memory.  Those are certainly two very important things.  

How about one-touch, easy to understand realistic lighting?  No parameters to set, or very few.  No having to link them to shaders or anything ultra-techy like that.  It goes without saying that realistic lighting should produce realistic shadow-fall.  Radiosity, specularity, all that stuff should be assignable based on simple, real-life conditions.  For example:  Object is stainless steel...here's your setting.  One-touch and it's set.  
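Nothing like this exists in Poser today, but the "one-touch" idea is easy to sketch: a lookup table from real-world material names to the settings a renderer needs. All names and values below are hypothetical, for illustration only:

```python
# Hypothetical sketch of "one-touch" material presets: a lookup table of
# real-world surface names mapped to the shader parameters a renderer needs.
# The preset names and parameter values are illustrative, not Poser's API.

PRESETS = {
    "stainless steel": {"diffuse": 0.05, "specular": 0.95, "roughness": 0.10, "reflective": True},
    "concrete":        {"diffuse": 0.80, "specular": 0.05, "roughness": 0.90, "reflective": False},
    "glass":           {"diffuse": 0.02, "specular": 0.90, "roughness": 0.02, "reflective": True},
}

def apply_preset(material_name):
    """Return the full shader setup for a named real-world material."""
    try:
        return PRESETS[material_name]
    except KeyError:
        raise ValueError(f"No preset for {material_name!r}")

settings = apply_preset("stainless steel")
print(settings["specular"])  # 0.95
```

The whole "one-touch" promise lives in the lookup: the user names a real-world surface, and every technical parameter is filled in behind the scenes.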

The other angle is the render engine itself.  We know from other software out there that realistic renders are not only possible, they are being done all the time, but the trick is you have to know an infinite amount of variables and check off endless boxes.  Rendering realism still takes an enormous amount of time, which leads me to believe software just simply isn't making good enough use of processor speed.

Those are my thoughts; I'm anxious to read yours.
-WTB


Robo2010 ( ) posted Sun, 11 February 2007 at 11:05 PM

I agree. I work very hard on trying to get realistic renders. Although I sort of can, I'm limited to just a character alone in a scene. I try to make scenes realistic enough with a character, but I end up with error messages (low memory), when in fact my Poser 6 machine exceeds the recommended system requirements three times over. I have moved away from V3 and M3 characters to vehicles and other things, and have even tried making a landscape in Poser 6. Yes, there are mountains here and there, and some scene stuff at DAZ. I do have those, but they don't cut it. I dreamed of taking a mesh landscape out of Bryce 5, but was told (and read) that it's impossible; the program isn't made for such things, or it can, but with no textures and only by following a list of steps. City scenes are the same. I have aircraft, and it pains me to think of animating an aircraft going over hills, a town, and a city: I'd have nothing, and it would be very short. If I had thousands of dollars or won the lottery (dreaming), I would purchase 3ds Max. But I am a person struggling financially with my family. Animating is my dream, but I haven't even done it yet due to Poser 6 issues, and the cost of going to school here is tremendous. My avatar is the furthest I've gotten with animating.


tekmonk ( ) posted Sun, 11 February 2007 at 11:13 PM

Quote - How about one-touch, easy to understand realistic lighting?  No parameters to set, or very few.

Something like Maxwell already does this:

http://www.maxwellrender.com/

It renders out photoreal images without needing any kind of complex setup. Just create your meshes, assign real-world materials to them like 'chrome' or 'concrete', add a sun or a light with a wattage, and push render. It does all the rest. It's quickly becoming the standard in all kinds of architectural work.

As for speed, well you have to be realistic here. You are trying to mimic the very complex interaction of light with material and media. I mean this is the kind of stuff that gives quantum physicists and supercomputers a nightmare. So of course you are gonna have a tough time simulating it on a mere home computer.

CG apps have basically taken 2 different approaches to solving the problem. One is to make it as simple to use as possible, which is fast to use but inefficient and takes huge render times. The other is to make it as flexible as possible, which is hard to set up but can be very optimised and renders fast. Till now most apps have followed the fast and flexible approach, but some recent ones like Maxwell are also going the other way, since CPU speeds are now getting fast enough to accommodate the inefficiency. It is still quite a bit slower than a nicely tuned render though.
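The render-time cost of the "just simulate the physics" approach has a concrete statistical face. Unbiased renderers of the Maxwell type estimate each pixel by averaging random light-path samples, and Monte Carlo error only shrinks as 1/sqrt(N). A minimal sketch (a stand-in estimator, not a real renderer):

```python
# Why brute-force physical rendering is slow: Monte Carlo error falls as
# 1/sqrt(N), so halving the noise costs 4x the samples. The "pixel" here is
# just a toy estimator whose true value is 0.5.
import math
import random

def estimate(n_samples, rng):
    # Stand-in for one pixel: the average of n noisy random samples.
    return sum(rng.random() for _ in range(n_samples)) / n_samples

def rms_error(n_samples, trials=2000):
    rng = random.Random(42)
    errs = [(estimate(n_samples, rng) - 0.5) ** 2 for _ in range(trials)]
    return math.sqrt(sum(errs) / trials)

e100 = rms_error(100)
e400 = rms_error(400)
print(round(e100 / e400, 1))  # close to 2.0: 4x the samples, half the noise
```

That square-root law is exactly why "a bit less grain" can mean an overnight render, whatever the CPU speed.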

As for when we get all this in Poser, well, your guess is as good as mine there :)


jfbeute ( ) posted Mon, 12 February 2007 at 12:58 AM

One thing to remember is that taking a good photo takes a lot of work; taking a snapshot is easy, but creating the perfect photo is far from easy or quick. With renders we often want to create situations that are impossible in one way or another.
Combining the difficulty of making the perfect photo with the impossible picture requires great flexibility in setting up a scene, with all objects, lights and shadows in just the right spot.

We don't really want real photorealism; we want the impression of realism, and we want it easy and quick.

My conclusion: Dream on !
It is never going to be easy or quick but some illusions can be created quicker and easier than is nowadays possible. We will always want it easier, quicker and better looking. Make the best of what we got now and great results are possible.


bantha ( ) posted Mon, 12 February 2007 at 1:41 AM

FireFly does neither radiosity nor subsurface scattering. For realistic images, both have to be faked. I assume those are the two main problems in creating truly photorealistic renders.


A ship in port is safe; but that is not what ships are built for.
Sail out to sea and do new things.
-"Amazing Grace" Hopper

Avatar image of me done by Chidori


kawecki ( ) posted Mon, 12 February 2007 at 3:10 AM

Something is wrong; that is what I've been trying to find for some time.
One thing I discovered is that 3D rendering is based on the camera model, but our eyes are not a photographic machine.
What your eyes see is not what a camera sees; photography is an art by itself.
You take a photo of something that looks like crap to your eyes and in the picture it is a piece of art, or you take a photo of something that is beautiful to your eyes and what you see in the image is a piece of crap.
Eyes and camera are not the same!

Stupidity also evolves!


kawecki ( ) posted Mon, 12 February 2007 at 3:32 AM

Want something more to think about?
Why do we see white or grey? White is when the values of R, G and B are equal, and the same is true of grey.
You might think the explanation is obvious: white is RGB = 255,255,255 and a shade of grey can be 128,128,128, so they are different.
OK, now take a better look at the values: the difference between white (255) and grey (128) is the intensity of light. So if the difference is only intensity, then if we illuminate a grey object with a more intense light it should become white.
Let's go to practice: look around a room with closed windows and you will see grey objects, unless it is too dark to see anything. Now open all the windows and let the sunlight enter: what was grey continues to be grey and what was white continues to be white, even though the illumination in the room increased a hundredfold!
Some variable is missing!
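The ambiguity behind this puzzle can be shown with a few lines of arithmetic: a camera sensor records only the product of surface reflectance and illumination as a single number, so it genuinely cannot tell the two cases apart. A minimal sketch (values illustrative):

```python
# The "missing variable": a sensor records reflectance x illumination as one
# number, so it cannot tell a grey surface in bright light from a white
# surface in dim light. The brain somehow factors that product back apart.

def recorded_value(reflectance, illumination, max_value=255):
    """Pixel value a camera would record (clipped to the sensor range)."""
    return min(max_value, round(reflectance * illumination * max_value))

white_in_dim_light = recorded_value(reflectance=1.0, illumination=0.5)
grey_in_full_light = recorded_value(reflectance=0.5, illumination=1.0)

print(white_in_dim_light, grey_in_full_light)  # 128 128: indistinguishable
```

Human vision resolves the ambiguity by estimating the illumination from context, which is exactly the extra variable a raw RGB value lacks.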

And what about the color brown? Lights that emit brown light do not exist!

Stupidity also evolves!


RorrKonn ( ) posted Mon, 12 February 2007 at 3:34 AM

If ya 3D tools are not good enough, get better tools.

Get a main 3D app like C4D

http://www.maxon.net/pages/dyn_files/dyn_htx/htx/welcome_e.html

and a plug like BodyStudio

http://www.e-frontier.com/go/poser/addons

 

 

RorrKonn
http://www.atomic-3d.com

============================================================ 

The Artist that will fight for decades to conquer their media.
Even if you never know their name ,your know their Art.
Dark Sphere Mage Vengeance


RorrKonn ( ) posted Mon, 12 February 2007 at 3:54 AM · edited Mon, 12 February 2007 at 3:55 AM

Forgot zBrush has micro displacement for killer textures.

http://www.pixologic.com/zbrush/home/home.php

 

RorrKonn
http://www.atomic-3d.com

============================================================ 

The Artist that will fight for decades to conquer their media.
Even if you never know their name ,your know their Art.
Dark Sphere Mage Vengeance


tekmonk ( ) posted Mon, 12 February 2007 at 4:38 AM

I agree, a better renderer than FireFly definitely helps. Not that FireFly is bad exactly, but it is so slow that it makes it very hard to try out lots of things in it. So most people just stick with the defaults or whatever few light sets they buy, which of course is not gonna look amazing. I know I got a lot better at lighting when I added XSI to my workflow. Nothing beats trying out different light/texture settings and doing a quick render as a learning tool.

Quote -
Firefly does neither radiosity nor sub surface scattering. For realistic images, both have to be faked. I assume, that would be the two main problems in creating truly photorealistic renders.

I wish there were an easy way to write shaders for FireFly, like you can with other RenderMan renderers. There are some very cool shaders for SSS and such that I would love to port to it. A wrap light shader would also rock. Ah well, maybe eF will do something about this in future versions.


pjz99 ( ) posted Mon, 12 February 2007 at 4:54 AM

Well frankly, reality has some problems:

  • There are no airbrushes in reality
  • People are short and tubby in reality

I expect that the reality development team will eventually work these bugs out, but until then they're always going to be problematic.

My Freebies


jonthecelt ( ) posted Mon, 12 February 2007 at 5:08 AM

Also, you've got to remember that you're asking a lot from a program that costs $200. Companies like Square, Pixar, DreamWorks and so on work for AGES trying to get something that looks and feels right -- you think they have any easy 'one-click' solutions for their lighting or materials? And the amount of money that has gone into their software and hardware farms would make your eyes water. So it's a little redundant asking why Poser, which is at its heart an entry-level program (which I love dearly), cannot do the same things as the big boys.

jonthecelt


jonthecelt ( ) posted Mon, 12 February 2007 at 6:31 AM

OK, this is strange... I get an email notification that Robo2010 has replied to this... the list of threads says that he's the last person to reply... and yet I can't see his post under mine... something peculiar going on in Renderosity-land... :-/

jonthecelt


RedHawk ( ) posted Mon, 12 February 2007 at 6:38 AM

...could be he simply deleted his post...

<-insert words of wisdom here->


Dale B ( ) posted Mon, 12 February 2007 at 6:48 AM · edited Mon, 12 February 2007 at 6:51 AM

Anyone curious about why we don't have one-click lighting and photorealism would do well to hunt down a trade paperback titled [digital] Lighting and Rendering, 2nd edition, by Jeremy Birn (a technical lighting director at Pixar; two of his credits are The Incredibles and Cars), and read the book. It is pretty much render-application neutral, and the thrust of it is achieving the effect, not simulating the real world. How? You cheat. You learn how real lighting works, and then you find the most efficient cheat.

Real-world lighting is an -analog- process. Digital simulations of an analog process involve complex simulation formulas and floating-point math that can go out to 200-300 places to the right of the decimal point before the granularity of the digital process is fine enough to be ignored.

Probably the first, and classic, cheat he shows in passing is the table lamp and shade. Lots of newbies get frustrated with this kind of mesh; they think all they have to do is plop a point light where a bulb would go in real life, and it will work just like the real thing. Except it doesn't, because a light bulb acts as a diffuse scattering shell for a source of light that is actually linear (look at a filament; not just a point in space, is it?). Lampshades typically have white interiors of translucent cardboard, reflecting much of the light from the bulb back inwards (and bleeding some light through for decorative purposes, depending on the kind of shade. How much time would it take to model that shade with optically accurate materials or shaders applied to the required layers?), to eventually win free out the top or bottom of the shade, where interactions with the atmosphere and particulates create the perceived cones of light (and I'm not even going to try to get into atmospheric interactions...).

The most common cheat? A point light and two spots, one pointing up, one down. The point basically just illuminates the shade for the bulb 'hotspot' and provides a bit of fill; the spots are what actually generate the light cones, and you have far more control over what is happening in your scene. (Want a party environment? Keep the lamp spots at the same intensity, or make the upper one slightly brighter, to give a smoother fill to the scene. Need a dramatic desk shot? Dim the upper and increase the lower so that the eye is drawn toward the brighter area.)

You also have to define what you mean by photorealism. Like a picture snapped outdoors? There are so many photons out there interacting with so many reflective, refractive and translucent surfaces and elements that the practical number of interactions may as well be infinite... multiplied over time and any motion of any of those elements. A studio portrait? You have floods, fills, rims and kickers being used in those... and if you use them in a CG app, you can create the illusion of that kind of lighting, just like a portrait photographer creates their illusions.

And maybe most importantly, forcing some standard of realism actually limits control rather than improving it. It may be a shortcut to one specific effect... but at the cost of flexibility and of being able to break the 'rules' for a dramatic effect. Tanstaafl...
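The point-plus-two-spots cheat can be written down as plain data. The structure and numbers below are illustrative, not any renderer's actual API:

```python
# Dale B's table-lamp cheat as plain data: a dim point light to "light" the
# shade itself, plus two spotlights that actually produce the visible cones.
# All types, fields, and intensity values here are made up for illustration.

def lamp_rig(mood="party"):
    # "party": balanced cones for smooth fill; "desk": dim upper, bright lower
    # so the eye is drawn to the pool of light on the desk.
    upper, lower = (1.0, 1.0) if mood == "party" else (0.3, 1.4)
    return [
        {"type": "point", "purpose": "shade hotspot + fill", "intensity": 0.4},
        {"type": "spot",  "purpose": "upward cone",   "intensity": upper, "aim": "up"},
        {"type": "spot",  "purpose": "downward cone", "intensity": lower, "aim": "down"},
    ]

for light in lamp_rig(mood="desk"):
    print(light["type"], light["intensity"])
```

The cheat trades one physically honest light for three controllable ones, which is exactly the flexibility argument: each spot can be dialed independently for the shot.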


Tomsde ( ) posted Mon, 12 February 2007 at 9:57 AM

Poser can produce photorealistic results with the right knowledge of the tools. No, it's not quick and easy. I highly recommend The Secrets of Poser Artists book; it's inexpensive and goes through the process of getting a photorealistic render step by step. The book is wonderful; I learned so much from it that I want to read it again.


AnAardvark ( ) posted Mon, 12 February 2007 at 2:32 PM

Quote - Want something more to think about?
Why do we see white or grey? White is when the values of R, G and B are equal, and the same is true of grey.
You might think the explanation is obvious: white is RGB = 255,255,255 and a shade of grey can be 128,128,128, so they are different.
OK, now take a better look at the values: the difference between white (255) and grey (128) is the intensity of light. So if the difference is only intensity, then if we illuminate a grey object with a more intense light it should become white.

 
Human vision is also highly influenced by edge effects and boundaries between blocks of color. I saw a wonderful lecture given by Dr. Land where he projected a bright circular spotlight on a board and asked what color the object was. The audience all answered "white". He then removed the mask from the spot and, with the same lighting over the whole board, we saw that the light had been illuminating a red circle on a board with various colored polka dots.


jhustead ( ) posted Mon, 12 February 2007 at 4:56 PM · edited Mon, 12 February 2007 at 5:02 PM

Robo2010 said> Quote - Animating is my dream, but I haven't even done so yet due to poser(6) issues. And the cost to go to school here is tremendous. Only my avatar is the furthest for animating. 

 

Are you a student in school/college? If so most companies usually offer student discounts. For instance since I'm in school I bought Poser 6 for $135 and I also bought Lightwave 3D 8.5 for $195. Right now at the same store I made my purchases at they’re offering 3D Studio Max 8 for $179. The website is: http://www.academicsuperstore.com/

 

-James


DarkEdge ( ) posted Mon, 12 February 2007 at 4:59 PM

hey, if you find a renderer that has that one-button function...please let me know.
i have to do so much postwork to make anything believable, it's not even a render anymore! lol!

Committed to excellence through art.


Keith ( ) posted Mon, 12 February 2007 at 5:33 PM

Quote -
OK, now take a better look at the values: the difference between white (255) and grey (128) is the intensity of light. So if the difference is only intensity, then if we illuminate a grey object with a more intense light it should become white.
Let's go to practice: look around a room with closed windows and you will see grey objects, unless it is too dark to see anything. Now open all the windows and let the sunlight enter: what was grey continues to be grey and what was white continues to be white, even though the illumination in the room increased a hundredfold!
Some variable is missing!

Yes, your rods and cones.

A red object (or a white one or a blue one) appears gray in the dark not because of anything about the object but because of something about you. In low light humans have very poor colour vision, so things appear to us in grayscale. When you illuminate the room you increase the intensity of reflected light, which provides sufficient photons to fire the colour-sensitive portions of the eye.



fls13 ( ) posted Mon, 12 February 2007 at 9:40 PM

There are a lot of very good freeware raytrace renderers out there. POV-Ray is the one I use. It lights pretty easily. Yeah, it's a pain to export, but the lighting is so much easier and the results so much better that it's worth the extra effort.


kawecki ( ) posted Mon, 12 February 2007 at 11:31 PM

There are no grey lights in real life; in Poser they do exist.
If you illuminate a white sheet of paper you can dim the light and the sheet will not turn grey; it will continue to be white until the moment everything becomes so dark that you see nothing.
In physics there is a difference between a white and a grey object. A white object is one that reflects 100% of the incoming light energy; a black object is one that absorbs 100% of the incoming energy, reflecting 0%. A grey object is something in between a white and a black object, so it absorbs something and reflects something.
Whether an object is white or grey is defined by its absorption coefficient and does not depend on the illumination intensity.
In the RGB illumination model, whether something is grey or white depends on the illumination intensity.
In some way that I cannot explain, human eyes follow the physical model and not the RGB illumination model, and this has nothing to do with rods and cones.
If you know the incoming energy and measure the reflected energy, you are able to find the absorption coefficient and so know how grey an object is.
Our eyes receive the reflected energy, so we know its intensity, but how does our brain get the missing parameter, the incoming energy, to complete the calculation?

The difference between rendering and real life is not the complexity and the number of variables; some real-life scenes can be very simple, with only one light source and simple geometry and texturing, so it is not difficult to model and render the scene. You can use 3ds Max, Maya, a lot of shaders and whatever you want, but when your eyes look at the rendered scene they will find it artificial, even if it is technically and mathematically perfect.
The difference between real life and rendering is some missing parameters that just make this difference.

Stupidity also evolves!


steve1950 ( ) posted Tue, 13 February 2007 at 12:57 AM

Quote - ....In the RGB illumination model, whether something is grey or white depends on the illumination intensity.
In some way that I cannot explain, human eyes follow the physical model and not the RGB illumination model, and this has nothing to do with rods and cones.
If you know the incoming energy and measure the reflected energy, you are able to find the absorption coefficient and so know how grey an object is.
Our eyes receive the reflected energy, so we know its intensity, but how does our brain get the missing parameter, the incoming energy, to complete the calculation?

 

The missing parameter is called "White Balance". A lot of cameras can be adjusted for this but your brain does it automatically.

In your example, a piece of white paper in 50% illumination is reflecting the same amount of light as a 50% grey sheet in 100% light. Also, the colour balance of the light can change, and the paper would appear to change colour.

The magic bit is that your brain knows that the paper is white and readjusts the illumination and colour balance so that the paper still looks white. A camera can do this using exposure controls and white balance (you tell the camera what is white and it adjusts all the other colours accordingly).
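The "tell the camera what is white" step amounts to a per-channel rescale (a white-patch, von Kries-style correction). A minimal sketch, with made-up sample values:

```python
# White-patch white balance: pick a pixel you declare to be white, then
# rescale every channel so that pixel maps to pure white. A neutral grey then
# comes out with equal channels at any overall light level or colour cast.

def white_balance(pixels, white_sample):
    """Rescale (r, g, b) pixels so that white_sample becomes (255, 255, 255)."""
    gains = [255.0 / c for c in white_sample]
    return [tuple(min(255, round(c * g)) for c, g in zip(px, gains))
            for px in pixels]

# Under a warm, dim light: "white" paper reads (200, 150, 100), and a surface
# with 40% reflectance reads (80, 60, 40).
scene = [(200, 150, 100), (80, 60, 40)]
balanced = white_balance(scene, white_sample=(200, 150, 100))
print(balanced)  # [(255, 255, 255), (102, 102, 102)]
```

After the rescale, the grey surface lands on equal R, G, B values: the colour cast and the overall intensity have both been divided out, which is the "missing parameter" being supplied from outside.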


kawecki ( ) posted Tue, 13 February 2007 at 3:19 AM

That is the question: how does my mind pick up the missing information? If the camera were able to pick up the same information, I would not need to tell it.

I had a similar experience, though in a different area, many years ago.
I was trying to synthesize a piano sound: a lot of theories, harmonic analyses of a piano sound, many attempts; whatever I tried didn't sound like a piano, so I gave up.
Many years later I was working on sounds for a pinball machine; this time it was not music, it was noise. I used a Z80 microprocessor and experimented with many ideas and digital algorithms for generating noises and FX.
The machine needed 10 to 20 sounds, so I invented, experimented, and what was useful I added to the machine.
In one of my experiments with white noise and shift registers I found something that had the sound of a piano, but it was not music, it was noise!!!
The sound was of a bad piano, an untuned piano, but everyone was able to recognise the sound of a piano.
In short, I found something that defined, for human ears, the sound of a piano even though it lacked any melodic content.
In summary, there exist some parameters that make the difference between what is artificial and what is real. If you find the right parameters the result can be bad, of poor quality, but it looks real.

In Poser I don't use painted textures; however much better, bigger, and higher quality painted textures may be, any texture taken from a photograph produces a much better result.
And some photographic textures that I have are small and of bad quality if you look at them closely, but once applied to a model and rendered the result is excellent. (It doesn't mean that the textures must be of bad quality, only that I was not able to find the same ones in better quality.)
There is something in photographic textures that nobody is able to paint or generate.
Also there is a technical clue that I have not been able to decode yet: they compress much better!
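For what it's worth, the "white noise and shift registers" trick described here closely matches what was later published as the Karplus-Strong plucked-string algorithm: fill a delay line with noise, then repeatedly feed back the average of adjacent samples. A minimal sketch (all parameter values illustrative):

```python
# Karplus-Strong string synthesis: a delay line full of white noise, recycled
# through a two-point averaging filter. The averaging damps high frequencies,
# so the noise burst decays into a pitched, string/piano-like tone.
import random
from collections import deque

def karplus_strong(frequency=220, sample_rate=44100, duration=0.5, seed=1):
    rng = random.Random(seed)
    n = int(sample_rate / frequency)            # delay-line length sets pitch
    buf = deque(rng.uniform(-1.0, 1.0) for _ in range(n))
    out = []
    for _ in range(int(sample_rate * duration)):
        first = buf.popleft()
        buf.append(0.5 * (first + buf[0]))      # two-point averaging filter
        out.append(first)
    return out

tone = karplus_strong()
# The tail is much quieter than the initial noise burst: the sound decays
# like a struck string, even though the input was pure noise.
print(max(abs(s) for s in tone[:200]) > max(abs(s) for s in tone[-200:]))
```

This is exactly the phenomenon described: no melodic content in the input, yet the ear immediately hears a plucked or struck instrument, because the decay behaviour carries the "realness" parameter.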

Stupidity also evolves!


pisaacs ( ) posted Tue, 13 February 2007 at 5:20 AM

"The difference between real life and rendering is some missing parameters that just make this difference."

Also, eyes are in constant motion when looking at the surroundings, with things going in and out of focus and your memory patching it all together. It's one of the things that makes photographs and digital renders so different from real seeing, even though they have their own beauty of style and substance.


ccotwist3D ( ) posted Tue, 13 February 2007 at 10:53 AM

Does poser 7 support normals mapping? If so, the polycount of the model isn't that important.


XENOPHONZ ( ) posted Tue, 13 February 2007 at 11:18 AM

One-click photorealism is, no doubt, coming.  It just isn't here yet.  And probably won't be for several more years.

I believe that the day is coming when you'll be able to create digital clones of anyone -- and then put them into movies which you've "directed" on your own PC.  But we aren't there yet.  At least not without high-end apps and mega-powerful machines.  Which puts it well out of the reach of the vast majority of people -- for now.

Yes -- there might someday be a new cottage industry of truly "indie" films for sale/download on the internet -- films which have no human actors at all.  But they look like they're real.

Will it ever be possible for a software investment of $200?  I doubt it -- but who knows?

Something To Do At 3:00AM 



XENOPHONZ ( ) posted Tue, 13 February 2007 at 11:20 AM

Then again -- perhaps the world will explode long before any of this becomes possible.

Something To Do At 3:00AM 



tekmonk ( ) posted Tue, 13 February 2007 at 12:05 PM

Quote - Does poser 7 support normals mapping? If so, the polycount of the model isn't that important.

You don't need it in Poser... Since FireFly is a REYES renderer, it does displacement mapping pretty fast. So you can use that instead of normal maps, and have it look better too.

Quote - I believe that the day is coming when you'll be able to create digital clones of anyone -

Yep, we are getting quite close. The Davy Jones character was a fully CG human in most of that Pirates movie. Also that shot in the new Superman movie where they resurrected Brando (though briefly). Digital stunt doubles are already quite common; the Spider-Man movies, X-Men, the Matrix movies etc. all used them to good effect. FaceRobot and that automatic facial animation capture system they were showing at the last SIGGRAPH are also quite cool and fast. As is that human skin recreation system that someone posted here a few weeks back. Now all that is needed is for the tech to become cheaper/more common, plus hardware support.

I believe the 'put your face on your game character' thing in some of the new XBOX 360 and Wii games is a good step in this direction as well.


XENOPHONZ ( ) posted Tue, 13 February 2007 at 12:55 PM

Quote - Also that shot in the new Superman movie where they resurrected Brando (though briefly)

 

Yes -- death will no longer be an impediment to an acting career.  We'll have a whole new set of Fred Astaire/Ginger Rogers movies.  Clark Gable -- Rita Hayworth.  Even River Phoenix.  Bring 'em all back.

(Actually, the idea sounds kind of morbid to me.)

It'd be interesting to see a "reality" TV show -- created with people no longer around.

Something To Do At 3:00AM 



Tiari ( ) posted Tue, 13 February 2007 at 1:26 PM

I think we don't have it "yet" simply because in the beginning the genesis of Poser was simply what its name says: posing. An artist's tool to stand in for a live model and to be painted from, not used as its own medium.

As time passes, as we see, this has changed........ a LOT, but Poser still has its limitations. It will take several more years of development, I think, before it is completely geared to a render-only market.

It's come a long way, baby, considering that in recent times some phenomenal creators have made near photo-quality stuff with the program. But for a purchase price under the 300-dollar mark, there really is only so far you can go.


lkendall ( ) posted Tue, 13 February 2007 at 2:12 PM

2/13/07

XENOPHONZ:

"It'd be interesting to see a "reality" TV show -- created with people no longer around."

Good one! I thought that was already the formula for "reality" TV, to use people who have never been or are no longer around.

On topic:

The development of software sophistication and power has never kept up with the increasing power and capabilities of computer hardware. Mainly this is because programmers take advantage of most if not all of the extra power to decrease the time needed to write, debug, and beta test software products. The programs of yesterday bloat to fill the added memory and storage of today, while slowing and choking the more powerful processors with a glut of code, processes, and unused functions. It is like having the fastest car in the world and hitching a load to it that is so heavy that it makes the car crawl at 10 miles an hour.

LMK

Probably edited for spelling, grammer, punctuation, or typos.


ccotwist3D ( ) posted Tue, 13 February 2007 at 5:26 PM · edited Tue, 13 February 2007 at 5:28 PM

Quote - > Quote - Does poser 7 support normals mapping? If so, the polycount of the model isn't that important.

You don't need it in Poser... Since FireFly is a REYES renderer, it does displacement mapping pretty fast. So you can use that instead of normal maps, and have it look better too.

Displacement mapping is ideal, but most people who create textures here seem to only create bump maps and treat them as displacement maps using the displacement node. That's not really using displacement mapping, and it won't let you add the sort of detail you can attain using ZBrush on lower-poly proxies or models. A combination of normal, bump, and displacement map nodes couldn't hurt.
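The distinction between these map types can be made concrete: a normal map stores explicitly the slope information that a bump/height map only implies, which is why one can be derived from the other. A minimal sketch (pure Python, illustrative only) deriving a tangent-space normal map from a height map with finite differences:

```python
# A bump/height map and a normal map carry the same detail in different forms:
# the normal map stores the surface slope explicitly. This derives normals
# from a height map via central differences, encoded in the usual 0-255 RGB.
import math

def height_to_normal(height, strength=1.0):
    """height: 2D list of floats in [0, 1]. Returns RGB-encoded normals."""
    h, w = len(height), len(height[0])
    out = []
    for y in range(h):
        row = []
        for x in range(w):
            # Central differences, clamped at the borders.
            dx = (height[y][min(x + 1, w - 1)] - height[y][max(x - 1, 0)]) * strength
            dy = (height[min(y + 1, h - 1)][x] - height[max(y - 1, 0)][x]) * strength
            nx, ny, nz = -dx, -dy, 1.0
            length = math.sqrt(nx * nx + ny * ny + nz * nz)
            nx, ny, nz = nx / length, ny / length, nz / length
            # Map [-1, 1] -> [0, 255]; flat areas encode as (128, 128, 255),
            # the familiar lilac-blue of tangent-space normal maps.
            row.append(tuple(round((c * 0.5 + 0.5) * 255) for c in (nx, ny, nz)))
        out.append(row)
    return out

flat = [[0.5] * 4 for _ in range(4)]
print(height_to_normal(flat)[0][0])  # (128, 128, 255)
```

Displacement goes one step further than either: instead of just shading as if the slope were there, the renderer actually moves the surface, which is why it holds up at silhouettes where bump and normal maps break down.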


Teyon ( ) posted Tue, 13 February 2007 at 8:24 PM

Honestly, while I do understand that it'd be nice to achieve photorealism now and again, I have to wonder why everyone thinks this is the holy grail of 3D. We should try venturing outside of what we know, create new and interesting environments and creatures. Stylization and believability are what made The Incredibles...well...incredible. They weren't photoreal. Same goes for Monster House and Toy Story and...well, you get the idea. Photorealism is an admirable quest, but it's not the be-all and end-all of 3D in my book. As far as I'm concerned, if you can create something that draws me in, regardless of how "real" it is, it's a better image for it.

So...where are the surreal? Where's the style? That's the question I'd be asking (and hope to answer soon).


tekmonk ( ) posted Tue, 13 February 2007 at 9:22 PM

Quote - Displacement mapping is ideal, but most people who create textures here seem to only create bump maps and treat them as displacement maps using the displacement node. That's not really using displacement mapping, and it won't let you add the sort of detail you can attain using ZBrush on lower-poly proxies or models. A combination of normal, bump, and displacement map nodes couldn't hurt.

True, but that's a limitation of how merchants are still stuck supporting Poser 4, not a limit of the software itself. Normal maps, like bumps, are a 'fake' to simulate actual geometry on renderers that can't handle the performance hit of displacement. FireFly doesn't have this problem, so there is no need to fake it; just use displacement itself. It may even render faster than bumps.

Quote - Honestly, while I do understand that it'd be nice to achieve photrealism now and again, I have to wonder at why everyone thinks this is the holy grail of 3D?

A talented modeler such as yourself should understand this better than anyone... Could you have gotten as good as you are without first understanding the human body as it is in reality? All the countless studies you must have done, painstakingly recreating the shapes and the planes, all to learn and internalise the anatomy... The reason the toonish work (for example) in The Incredibles looks so authentic is that it also has this sort of solid base. Or in other words, you have to know the rules before you can break them. Photorealism right now is the 'rule' that we as CG artists are trying to understand. The surrealists, the abstracts and all that will come once the 'real' is understood to a degree that allows people to use it without the technical hurdles.

Not to mention that there are lots of benefits, from a VFX viewpoint, to having perfect photorealism. Actor salaries and temperaments are becoming increasingly nutty these days. As are the imaginations of directors and scripts, and the hunger of the audience to see cooler and cooler stuff. All of which means that CG photorealism fills a very tangible need in the industry. How else would you depict a half-human, half-squid undead pirate :)


Teyon ( ) posted Tue, 13 February 2007 at 9:38 PM

Point! NARF! Excellent response.  I must bow to the fact that it is important to know what you're breaking before you break it. That is key, in fact. I shall consider myself properly thrashed. :)

(oh, and thanks for the nod in there. I've still much to learn though)


lmacken ( ) posted Wed, 14 February 2007 at 12:10 AM

Cold hearted orb that rules the night,
Removes the colours from our sight,
Red is gray and yellow white,
But we decide which is right,
And which is an illusion.

The Moody Blues - Days Of Future Passed


kawecki ( ) posted Wed, 14 February 2007 at 4:23 AM

Aummmmmm

Stupidity also evolves!


AnAardvark ( ) posted Wed, 14 February 2007 at 11:52 AM

Quote - Honestly, while I do understand that it'd be nice to achieve photorealism now and again, I have to wonder why everyone thinks this is the holy grail of 3D. We should try venturing outside of what we know, create new and interesting environments and creatures. Stylization and believability are what made The Incredibles...well...incredible. They weren't photoreal. Same goes for Monster House and Toy Story and...well, you get the idea. Photorealism is an admirable quest, but it's not the be-all and end-all of 3D in my book. As far as I'm concerned, if you can create something that draws me in, regardless of how "real" it is, it's a better image for it.

So...where are the surreal? Where's the style? That's the question I'd be asking (and hope to answer soon).

 

Well, there is also surreal realism. The way the monsters moved in Monsters, Inc. was so realistic that at one point in the movie (in the large workroom) I found myself wondering which parts were set and which parts were a CGI background :)

