Renderosity Forums / Poser - OFFICIAL






Subject: surface imperfection material shader


iikuuk ( ) posted Mon, 06 June 2005 at 2:40 AM · edited Sun, 24 November 2024 at 9:26 PM

file_251859.jpg

Hi, I wanted to make this perfect, but now I see I need some help with a shader node. I was trying to build a surface imperfection node: something that shows the areas of an object's geometry where the surface bends (convex) more than 'normal'. The problems I found:

- You cannot use the du and dv nodes in P6 (bug report already sent).
- The dPdu and dPdv nodes show only the direction the geometry heads in, but nothing about whether it changes.
- The dNdu and dNdv nodes contain the information we need, but since they don't change within a single vertex, you only get partial results, which should be fed into a further shader (i.e. dNdu and dNdv will be the same within one vertex).

I would be interested if anyone can show me a nice way to 'smudge'/blur a result in a P6 shader... (see the attached shader node)

About the shader: the way it works is pretty simple. It adds the lengths of the partial derivatives of the normal vector, then transfers the values into the 0-1 interval, and finally checks whether that is over a predefined value. Although I'm a mathematician (kind of), I didn't look too hard for the correct equation, so if you have any advice, I'm definitely curious.
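For anyone who wants to poke at the math outside the material room, here is a minimal Python sketch of the calculation described above (sum the lengths of dNdu and dNdv, remap into 0-1, then threshold). The scale of 50 and the threshold of 0.5 are illustrative values only, not necessarily the ones in the attached network.

```python
import math

def curvature_mask(dNdu, dNdv, scale=50.0, threshold=0.5):
    """Return 1.0 where the surface bends strongly, 0.0 elsewhere."""
    # length (magnitude) of each partial derivative of the normal
    len_u = math.sqrt(sum(c * c for c in dNdu))
    len_v = math.sqrt(sum(c * c for c in dNdv))
    curvature = len_u + len_v                    # raw measure, any positive value
    remapped = curvature / (curvature + scale)   # squeezed into the 0-1 interval
    return 1.0 if remapped > threshold else 0.0

# sample values standing in for dNdu/dNdv at one shading point
print(curvature_mask((40.0, 10.0, 5.0), (30.0, 0.0, 20.0)))   # -> 1.0 (strongly curved)
print(curvature_mask((0.5, 0.2, 0.0), (0.1, 0.3, 0.0)))       # -> 0.0 (nearly flat)
```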


iikuuk ( ) posted Mon, 06 June 2005 at 2:42 AM

file_251861.jpg

Same shader step->diffuse render


iikuuk ( ) posted Mon, 06 June 2005 at 2:42 AM

file_251862.jpg

smoothstep->basic colorramp


iikuuk ( ) posted Mon, 06 June 2005 at 2:43 AM

file_251863.jpg

original shader node


iikuuk ( ) posted Mon, 06 June 2005 at 2:43 AM

file_251864.jpg

high-quality render of the original node


iikuuk ( ) posted Mon, 06 June 2005 at 2:45 AM

file_251865.jpg

'The Corroded Four'.


Ajax ( ) posted Mon, 06 June 2005 at 4:50 AM

Niiiiice work! This is excellent. [and you're welcome :-) ] How did you choose the number 50 in the transform step? Was it just a matter of trying different values until you found one that gave the right effect?




iikuuk ( ) posted Mon, 06 June 2005 at 4:54 AM

Well, that's something like a 'how far is far' value, but you really have to find the right value by trying, because it depends on the size of the facets, the point of view, and the geometry. For normal human poses and polygons, 50 was a nice value.
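A quick way to get a feel for that constant, assuming the transform step is the m / (m + k) remap Ajax describes further down: the remapped value crosses 0.5 exactly where the raw curvature m equals k, so a larger k pushes "how far is far" further out. The numbers below are purely for illustration.

```python
# tabulate the remap m / (m + k) for a few constants and curvature values
for k in (10, 50, 200):
    for m in (10, 50, 100, 500):
        print(f"k={k:>3}  curvature={m:>3}  remapped={m / (m + k):.2f}")
```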


stonemason ( ) posted Mon, 06 June 2005 at 5:33 AM

very cool! :)



Mec4D ( ) posted Mon, 06 June 2005 at 7:10 AM

Yes very nice work! top idea

_________________________________________________________

"Surrender to what it is - Let go of what was - Have faith in what will be "


dlfurman ( ) posted Mon, 06 June 2005 at 10:55 AM

Whoa! Very nice work!

"Few are agreeable in conversation, because each thinks more of what he intends to say than that of what others are saying, and listens no more when he himself has a chance to speak." - Francois de la Rochefoucauld

Intel Core i7 920, 24GB RAM, GeForce GTX 1050 4GB video, 6TB HDD space
Poser 12: Inches (Poser(PC) user since 1 and the floppies/manual to prove it!)


Rothrock ( ) posted Mon, 06 June 2005 at 3:38 PM

Does anybody know a good tutorial that explains how to "read" a shader like this? I did the real skin shader tutorial, and what is going on there makes sense to me. (Okay, not perfect sense, but enough!) But I'm still not quite seeing how to convert mathematical ideas into this kind of node language, or how to go the other way: run into something like this and come up with an idea of what it is doing.


face_off ( ) posted Mon, 06 June 2005 at 4:36 PM

This is exceptional! I wonder if a similar concept could be done for skin (but in reverse, so the more "imperfect", the higher the red content). I need to check out these new P6 nodes you are using. You are right... these things can't be blurred: one pixel in the render can't get material room info from another pixel (not through nodes, anyway). But some ideas... in this case you've got obtuse (is that the right term?) angles whose color you are trying to affect. If they were acute, you could use an AO node to determine whether other polys were in the vicinity (use it in conjunction with the diffuse node). Also, have you tried the Gather node? It has a parameter to gather from a specified angle; if you plug in 360 degrees, it might be a way for you to detect what the nearby polys are doing.

Creator of PoserPhysics
Creator of OctaneRender for Poser


Ajax ( ) posted Mon, 06 June 2005 at 4:45 PM

You probably need to know a little about vector calculus for this one, which is something that's rarely learned outside a maths or physics degree.

The first thing is that dNdv and dNdu are three-dimensional vectors representing the amount the normal vector changes as you travel across the surface of the model in the v and u directions respectively (those are the same u and v you get in a UV map). In order to figure out how strongly curved the surface is at any point, we want to know how long those two vectors are. A 3D vector is usually expressed as a set of three values like this: "(x,y,z)" or "(component0, component1, component2)". To find out how long the vector is, we take the square root of the sum of the squares of the three components: Sqrt(comp0^2 + comp1^2 + comp2^2). That's what iikuuk is doing in those two top right columns of nodes. The component functions extract those three values, the power functions square each of them, and then they are added together. The top two nodes over on the left take the square roots (two, since there are two vectors).

At this point we have a measure of the curvature of the surface, but we really want a measure that's always going to be between 0 and 1, rather than one that could conceivably be any positive value, so we have to do a little trick to convert it. What iikuuk does with the next couple of nodes is this: newmeasure = oldmeasure / (oldmeasure + 50). This is always smaller than 1 and could be as small as zero, but no smaller. The rest is really just about choosing a suitable cut-off point: how big does your measure need to be before you decide to chip the paint off?

A few general observations:

* This relies on having a good UV mapping. It needs to be a low-distortion mapping to work well.
* This won't work on models where the vertices have been split to create hard edges. You need a model with a continuous, unbroken mesh across those sharp edges.
* You can cut down the number of nodes by adding dNdv and dNdu together (use a color math node rather than a math node) and calculating the magnitude of the result rather than adding the magnitudes of the two separately. It'll give slightly different results, but it does essentially the same thing and is arguably a more "correct" approach.
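For readers who, like Rothrock, want practice "reading" the network, here is Ajax's walkthrough written out as plain Python, roughly one node per step. The sample dNdv value is made up, and the constant 50 is just the value discussed in this thread, not read from the actual shader file.

```python
import math

# a made-up sample value standing in for the dNdv node's output at one point
dNdv = (0.3, -0.8, 0.1)

# component nodes: pull out the three values
comp0, comp1, comp2 = dNdv

# power nodes: square each component
sq0, sq1, sq2 = comp0 ** 2, comp1 ** 2, comp2 ** 2

# add nodes, then a square root: the length of the vector
length_dNdv = math.sqrt(sq0 + sq1 + sq2)

# (the same chain is repeated for dNdu, and the two lengths are added)
oldmeasure = length_dNdv

# the transform step: always between 0 and 1, never reaching 1
newmeasure = oldmeasure / (oldmeasure + 50)

print(length_dNdv, newmeasure)
```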




Rothrock ( ) posted Mon, 06 June 2005 at 7:42 PM

Ajax, thank you for that. I actually do understand vectors and calculus; I need little refreshers now and then, but in general I get it, so your walkthrough is great. I'm guessing I just need more practice to be able to "read" it the way you did. I haven't ever really figured out UV, though. When I look at a flattened-out UV map, I'm guessing that one of those is equivalent to X and the other to Y, but since each of them will really be wrapped around in space, "they" gave them different designations to avoid confusion? Again, I really appreciate the guidance. If you know of any further places where I can get into the math a bit, it might really help me understand what is going on.


odf ( ) posted Mon, 06 June 2005 at 10:10 PM

This is outstanding. Great idea! Now we need someone to write a Python program for converting formulae into shader networks and vice versa. Oh, or maybe a math shader node which accepts a formula. HINT - HINT - SR2 - HINT - HINT :-)

-- I'm not mad at you, just Westphalian.


iikuuk ( ) posted Tue, 07 June 2005 at 2:43 AM

Thanks, and you're welcome :) I hoped someone with better English than mine would write a tutorial from the bare nodes I made ;) And yes, Ajax is right about the restrictions on the geometry, but as always there is no general solution for a shader problem like this; only a geometry built up from fractals would be perfect (which is mostly true for the real world...). About the last suggestion, I'm not quite sure the difference will be just a small amount. I have to look at it more, since I hadn't thought about a shortcut like that, but as far as I can tell it won't give a better result, though it will be quicker by far. Since it's a bit more complex than something I can just state vaguely, I'll check and let you know the result. (And yes, that requires more of that nasty vector calculus.)


obm890 ( ) posted Tue, 07 June 2005 at 5:41 AM

Wow! Very interesting indeed! Thank you iikuuk for posting this, and thank you Ajax for the illuminating commentary. I re-created this network just to see if I could get my head around what it is actually doing (alas, my head doesn't seem to stretch that far). That was before Ajax posted the explanation which helps a great deal. There's something truly exciting about feeling way out of one's depth but at the same time hungry to learn. I felt that way about 3D modelling, then about mapping and rendering, then figure creation, and now about this stuff. And the learning is made so much easier by people like you who share your mastery so generously here on the internet - "the university of the third age". Thank you Ob



iikuuk ( ) posted Tue, 07 June 2005 at 6:15 AM

LOL, I don't consider myself a guru :) but thanks. (If that was aimed at Ajax, I agree :) OK, back from the math papers: if you add dNdv and dNdu first AND do the calculation afterwards, you will not get the right result. The reason is pretty easy: if you add those vectors, it's like translating one onto the other, so the length of the sum will be smaller than (or equal to) the original |dNdu| + |dNdv|. Furthermore, if dNdu and dNdv point in exactly opposite directions (and that's not uncommon), the proposed addition will give a small value, which will not reflect the change of the surface. Only if you could do a "normals forward" on that node, or take the absolute value of each of the vector's components, would you get 'similar' values, but even then there would be a difference (and it would lose its extra speed).
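A quick numeric illustration of that point, using made-up sample vectors: when dNdu and dNdv point in roughly opposite directions, the magnitude of their sum collapses even though the surface is bending strongly in both directions.

```python
import math

def magnitude(v):
    return math.sqrt(sum(c * c for c in v))

dNdu = (30.0, 10.0, 0.0)
dNdv = (-25.0, -10.0, 5.0)   # roughly opposite to dNdu

separate = magnitude(dNdu) + magnitude(dNdv)                     # original network
combined = magnitude(tuple(a + b for a, b in zip(dNdu, dNdv)))   # add-first shortcut

print(f"sum of magnitudes: {separate:.1f}")   # ~59.0
print(f"magnitude of sum:  {combined:.1f}")   # ~7.1, the curvature gets lost
```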


stewer ( ) posted Tue, 07 June 2005 at 11:10 AM

"You are right....these things can't be blurred - one pixel on the render can't get material room info on another pixel (not through nodes anyway)." That's a feature (!) of the REYES rendering algorithm (the one that FireFly, PRMan or 3Delight are based on). The benefit is that since no shading point depends on other shading points, they can be calculated in parallel on vector computers or SIMD instruction sets and that the renderer can discard things it already rendered from memory (because it won't need them again).


Ajax ( ) posted Tue, 07 June 2005 at 4:59 PM

Thinking it over again, I agree with you iikuuk. It's better to keep the two partial derivatives separate and add their magnitudes afterwards.



