The final output can be used in the same way an ordinary mask image would be, such as driving a Blender node's Blending input.
Combining the colour subtraction technique and the colour separation technique gives even more variations of what can be done with a single image.
These techniques will work in P5 and P6, though in P5 it's all manual work; in P6 the new Material Room API for Python will allow scripts to do most of the "heavy lifting" in terms of adding the nodes required to separate out a particular mask.
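To make the idea concrete, here is a minimal sketch of both techniques outside Poser, in Python with Pillow. The file names are placeholders, and inside Poser the same arithmetic would of course be built from image map and colour math nodes rather than a script.

```python
# A sketch of colour separation and colour subtraction, using Pillow.
from PIL import Image, ImageChops

# One control map with effect zones painted in pure red, green and blue.
control = Image.open("effect_zones.png").convert("RGB")

# Colour separation: each channel is already a greyscale mask.
mask_r, mask_g, mask_b = control.split()

# Colour subtraction: areas painted yellow contain both red and green,
# so subtracting the green channel from the red one leaves a mask of
# the regions that are *only* red (negative results clamp to black).
mask_red_only = ImageChops.subtract(mask_r, mask_g)

mask_red_only.save("mask_red_only.png")
```

With mixed colours in the painted map, the subtraction step is what lets you pull more masks out than the three raw channels alone would give.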
Nice technique. How about using the colour ramp node to get three masks out of a greyscale image? Draw one mask in white, another in 66% grey and the third in 33% grey. Set up your colour ramp with three blacks and one white - white at the top would mask the white, white in slot two would do the 66% grey, and white in slot three would do the 33% grey. Not as powerful as your colour technique, nor would it allow as many masks, but it might be handy.
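The ramp trick is easy to test outside Poser too. Below is a toy emulation in plain Python; it assumes the ramp interpolates linearly between four slots at input values 0, 1/3, 2/3 and 1, which may not match Poser's node exactly, so check against your version.

```python
# Toy emulation of the Colour_Ramp trick: three blacks and one white,
# where the position of the white slot selects which grey level in the
# painted map lights up as a mask.

def ramp(value, slots):
    """Piecewise-linear ramp through four greyscale slot values."""
    stops = [0.0, 1.0 / 3.0, 2.0 / 3.0, 1.0]
    value = min(max(value, 0.0), 1.0)
    for i in range(3):
        if value <= stops[i + 1]:
            t = (value - stops[i]) / (stops[i + 1] - stops[i])
            return slots[i] * (1.0 - t) + slots[i + 1] * t
    return slots[3]

select_white = [0.0, 0.0, 0.0, 1.0]   # responds at value 1.0
select_66    = [0.0, 0.0, 1.0, 0.0]   # peaks at value 2/3
select_33    = [0.0, 1.0, 0.0, 0.0]   # peaks at value 1/3

for name, slots in (("white", select_white), ("66%", select_66),
                    ("33%", select_33)):
    print(name, [round(ramp(g, slots), 2) for g in (1.0, 0.66, 0.33)])
# Each setup responds (almost) only to its own grey level; the small
# leakage is the ramp's soft fade between slots.
```

The soft fade between slots is also why adjacent grey levels blend slightly into each other's masks, which can be a feature or a nuisance depending on the effect.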
Attached Link: http://www.renderosity.com/tut.ez?Form.ViewPages=910
Incidentally, I've just had my first tutorial approved, and by coincidence it's about using colour maths, in this case to adjust texture colours. If you mix and match techniques in the right way I think you could get several greyscale maps out of one image.
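I won't reproduce the tutorial here, but as a taste of the kind of colour maths it covers, here is a hypothetical Pillow snippet doing a multiply-style tint adjustment - the same arithmetic a colour math node set to Multiply performs with a solid colour as its second input. The file names and tint values are made up for illustration.

```python
# Multiply-style tint shift on a texture, emulating a colour math node.
from PIL import Image

texture = Image.open("skin_texture.png").convert("RGB")

# Multiply each channel by a tint factor; values below 1.0 darken
# that channel, warming or cooling the overall texture.
tint = (1.0, 0.85, 0.7)  # warm the texture up slightly
r, g, b = texture.split()
r = r.point(lambda v: min(255, int(v * tint[0])))
g = g.point(lambda v: min(255, int(v * tint[1])))
b = b.point(lambda v: min(255, int(v * tint[2])))
Image.merge("RGB", (r, g, b)).save("skin_warm.png")
```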
Any masks generated are meant to be used as inputs to other Shader networks to control effects.
These techniques work best on figures with a single continuous material (i.e. not split into multiple material zones).
Using colour separation would allow two masks to fade into one another, given a careful choice of colours: a region painted shading from pure red to pure green, for example, contributes gradually less to the red-channel mask and correspondingly more to the green one.
Anyone who is familiar with Terragen will understand how multiple layers of masks can be used to produce amazing effects.
I used a similar technique in an image using the Andromeda IV android texture. I wanted to make certain colored bits light up with ambient light, so I created the mask procedurally much as you do here, by selective color, and then I used that mask to bring the color back in and applied it to the ambient channel. As for rendering time, I've generally found that complex node arrangements make a much smaller difference than adding extra geometry, and in tight memory conditions they are much better than a real texture.
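For anyone wanting to try the glowing-bits approach, here is a rough sketch of the same steps with Pillow: selective colour to build the mask, then a multiply to bring the colour back. The file names are placeholders (not the Andromeda IV texture), and in Poser the result would be wired into the Ambient colour rather than saved to disk.

```python
# Selective-colour mask, then masked colour for an ambient channel.
from PIL import Image, ImageChops

texture = Image.open("android_texture.png").convert("RGB")

# Selective colour: keep only the areas where blue dominates red
# (negative results clamp to black, as with colour subtraction).
r, g, b = texture.split()
mask = ImageChops.subtract(b, r)

# Multiply the texture by the mask so only the selected areas keep
# their colour; everything else goes black and so stays unlit.
ambient_map = ImageChops.multiply(texture, mask.convert("RGB"))
ambient_map.save("ambient_map.png")
```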
I did some timing tests with nodes versus textures, and nodes were significantly slower than textures - on a simple cube each extra node added pushed the render time up by 2 to 3 seconds, while textures of any size made little or no difference. Memory-wise, textures at 1000x1000 seemed to take the same amount of memory as the nodes, but at any other size the textures took much, much more - 2 meg per 1000x1000 texture, 30 meg per 4000x4000 texture. I must experiment with geometry instead - I've got a displacement mapped column that would make the perfect test.
I hadn't really done any scientific timing tests, just general impressions formed while rendering one scene over and over. Part of the image was a background with a complex procedural material (a starfield built from several fractal patterns used together), while other parts of the image included a V3 and a Maddie. While passing over the procedural background the render just flew (multiple buckets per second), but it crawled on the characters (multiple seconds per bucket) - that's why I said nodes were faster than geometry. Also, in this scene I was trying to render at very high resolution (4500x4500), and a texture of the right size for the background pushed me beyond the memory limits that Firefly could handle. Considering that the render took hours, I was more than happy to sacrifice a few extra seconds or minutes on the procedural texture in order to have the memory to complete the render.
Here you see an even simpler setup to give you the effect. Note that a Math Function->Subtract node lets you get the inverse mask at render time, with no second image needed. From there you can drive the Blender node directly to mix the two inputs.
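Here is how that wiring behaves, emulated with Pillow (file names are placeholders): the Subtract step computes one minus the mask, and the Blender step is a per-pixel mix of its two inputs weighted by the mask.

```python
# Pillow emulation of the Subtract-then-Blender wiring.
from PIL import Image, ImageChops

mask = Image.open("mask.png").convert("L")
skin = Image.open("skin.png").convert("RGB")
effect = Image.open("effect.png").convert("RGB")

# Inverse mask generated on the fly (255 - pixel), the same result
# the Subtract node gives with 1 as its first value.
inverse = ImageChops.invert(mask)

# Blender-style mix: effect where the mask is white, skin where it is
# black, with smooth fades for the greys in between.
Image.composite(effect, skin, mask).save("mixed.png")

# Feeding the inverse instead simply swaps which input shows through.
Image.composite(effect, skin, inverse).save("mixed_inverted.png")
```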
Is that the effect you were looking for?
The recent interest in procedural skin effects, and the need for maps to control where those effects get applied, got me thinking about how to reduce the number of maps needed to include several effects at the same time.
The posts that follow show the basics of what I came up with.