Monkeycloud, you’re preaching to the choir. I described something very similar in one of the previous discussions – not claiming precedence, just great minds working similarly :-)
Robyn, I’m more interested in the UI, so it’s nice to see more on improvements at a lower level. Hopefully, you’ll get the tools you need. Having the ability to create shader objects (with inheritance maybe) would be a great boon to creators. There’s nothing wrong with the node-based approach – it’s popular. It is, IMO, not the ideal interface for everyone, and it’s not universal. I don’t know what C4D uses now, but they used to have a more channel-based approach, and of course, as mentioned, Vue has a different metaphor, with nodes as a more advanced interface. Some of those approaches are probably more accessible for people used to channels, filters, layers etc. from other graphics applications.

The skin gurus can work their magic and create presets that are as simple or as elaborate as they want, then expose properties like sheen, ageing and blemishes as desired, with the ability to use distribution maps to localize the effects.
A basic selection might be freckles on or off. Turning them on could activate selections for color, size, frequency and the distribution map, where you could e.g. limit freckles to the nose area. The same thing could work for scars, moles etc. Maybe by using different colors, you could have different features on the same map, though the ability to use layered maps would be preferable. If someone has a great tattoo setup, the only properties exposed might be age and the maps to actually define the tattoo and its location.
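To make that concrete, here is a minimal sketch in plain Python of how such a preset might expose a handful of properties while keeping the node setup hidden. The names (SkinPreset, build_nodes, the mask file) are invented purely for illustration; none of this is actual Poser Python.

# Hypothetical sketch: a skin preset exposing a few simple properties.
# Names like SkinPreset and build_nodes are invented for illustration, not Poser API.
class SkinPreset:
    def __init__(self):
        self.freckles = False                    # simple on/off switch
        self.freckle_color = (0.45, 0.25, 0.15)
        self.freckle_size = 0.5                  # 0..1, scales the pattern
        self.freckle_frequency = 0.3             # 0..1, density of the freckles
        self.freckle_map = None                  # optional distribution map, e.g. "nose_mask.png"

    def build_nodes(self):
        """Translate the exposed properties into the hidden (imaginary) node setup."""
        nodes = ["ImageMap(base_texture)"]
        if self.freckles:
            layer = "Freckles(color=%r, size=%r, freq=%r)" % (
                self.freckle_color, self.freckle_size, self.freckle_frequency)
            if self.freckle_map:
                # limit the effect to wherever the mask is white, e.g. just the nose
                layer = "Blend(%s, mask=%r)" % (layer, self.freckle_map)
            nodes.append(layer)
        return nodes

preset = SkinPreset()
preset.freckles = True
preset.freckle_map = "nose_mask.png"
print(preset.build_nodes())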
“I think some things lack guidance and people want to learn and that's what is frustrating. Am I off base?”
Yes and no, IMO. More ‘recipes’ is a great idea. OTOH, there are people who don’t want to learn to cook, and if those people make up any sizeable portion of your customers … Don’t make the mistake of thinking it’s all or nothing. Presets could expose as much or as little of their underlying complexity as desired - some might want to ‘hide’ their ‘proprietary’ work. People could annotate their ‘dishes’ and explain what each ingredient did, or people could start playing with them. The more choices, the better. I can buy a cake, I can buy cake mix and I can buy raw ingredients. There’s nothing that prevents me from changing any of those, like topping the cake with ice cream, adding nuts to the mix or substituting raw ingredients. Better cookbooks are fine, but it isn’t pioneer times, to stretch the culinary metaphor. Not everyone has the time, or the inclination, to use them. Who knows how many chefs these days started out with microwave popcorn.
"Democracy is a pathetic belief in the collective wisdom of individual ignorance." - H. L. Mencken
Quote - ... Robyn, I’m more interested in the UI, so it’s nice to see more on improvements at a lower level. Hopefully, you’ll get the tools you need. Having the ability to create shader objects (with inheritance maybe) would be a great boon to creators. There’s nothing wrong with the node-based approach – it’s popular.
Running with your idea... the node-group/node-object concept would lend itself beautifully to your improved UI idea. Indeed, I really think a proper library would be the way to go (EZSkin and other core materials could all make up this library), where the exposed end - the Simple Material Room bit - could be a few tickboxes that, say for skin, could give you freckles or change the skin tone or add water droplets or-or-or... the interface would "expose" a core library node set that would be instanced by a drop-down selection. Saves a lot of re-inventing the wheel.
And users could come up with their own libraries/objects, which they or their customers would interact with through that same interface.
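A rough sketch of that library idea in plain Python, assuming a hypothetical registry keyed by the names a drop-down would show (none of this is real Poser API, just the shape of the mechanism):

# Hypothetical core-material library: each entry builds a node group on demand.
def skin_group(opts):
    nodes = ["SkinGroup(tone=%r)" % opts.get("tone", "fair")]
    if opts.get("freckles"):
        nodes.append("Freckles()")
    if opts.get("droplets"):
        nodes.append("WaterDroplets()")
    return nodes

CORE_LIBRARY = {
    "EZSkin-style skin": skin_group,
    "Glossy metal": lambda opts: ["GlossyGroup(preset=%r)" % opts.get("preset", "Steel")],
}

def instance_material(name, **opts):
    """What a drop-down choice plus a few tickboxes might resolve to."""
    return CORE_LIBRARY[name](opts)

print(instance_material("EZSkin-style skin", freckles=True, tone="olive"))
print(instance_material("Glossy metal", preset="Chrome"))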
Sure, this might require a bit of a re-write of the mat room, though. I'm in the process of learning C++ and (subsequently) Qt4, but I doubt I'll be at a point where I can contribute to the code base before Poser 2045... :lol:
Monterey/Mint21.x/Win10 - Blender3.x - PP11.3(cm) - Musescore3.6.2
Wir sind gewohnt, daß die Menschen verhöhnen was sie nicht verstehen
[it is clear that humans have contempt for that which they do not understand]
Yeah, the splotches are keeping me from using emitters.
WARK!
Thus Spoketh Winterclaw: a blog about a Winterclaw who speaks from time to time.
(using Poser Pro 2014 SR3, on 64 bit Win 7, poser units are inches.)
The whole diffuse strength <-> IDL Intensity balance needs adjusting, as I'd latterly understood BB to be saying in that thread back in November / December?
i.e. why everyone started experimenting with much lower IDL intensity... e.g. down as far as 1.5, to compensate for the fact that the diffuse channel was blowing out?
Maybe it doesn't need adjusting so much as the defaults need fixing?
A global / master diffuse level setting in the render settings might be useful too?
Quote - I think there would be fewer complaints about the Material Room if Poser came with a library of decent materials.
The current included material collection varies considerably in quality: some are pretty good, some are just about OK, some are terrible. This inconsistency puts me off trying them out.
Okay, I agree with that.
lmk
Probably edited for spelling, grammar, punctuation, or typos.
Quote - I think there would be fewer complaints about the Material Room if Poser came with a library of decent materials. The current included material collection varies considerably in quality: some are pretty good, some are just about OK, some are terrible. This inconsistency puts me off trying them out.
As I see it, the main issue with the material room is that, for 99% of users, the current material room setup is an enigmatic obstacle to material-making. I'm even talking vendors, here. There actually are some pre-set materials that come with Poser, but these are squirreled away where most people don't even see them, and besides, a lot of these don't work properly with all the lighting currently available. Try it: load one on a cube, then turn on renderer gamma correction and render using your favourite lights. I won't say more.
What most people don't realise is that, with the new lighting models, the elaborate 'solutions' from yesterday's Poser aren't only superfluous, they're actually wrong. They were kludges to compensate for something that now exists in the light models. So, simple materials - i.e., a texture map plugged into Diffuse(.85) and this added to a Blinn() node, then going into the Alt_Diffuse channel of PoserSurface() - look better and render better than highly involved node-set materials. However, this doesn't appear to be common knowledge. A front-and-centre library of core materials and lighting to go with those materials would go a long way to allowing all but the most technical to achieve decent renders quickly... and those who are technical-minded can still make elaborate materials that will look good in any of the lighting available in the current Poser.
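For anyone who wants that recipe spelled out, here is a hedged pseudo-Python sketch of the same idea; the little Node class is a stand-in I've invented, not actual matmatic or Poser Python syntax.

# Stand-in Node class, just to show the shape of the "simple material" above.
class Node:
    def __init__(self, kind, **params):
        self.kind, self.params = kind, params
    def __add__(self, other):                 # "...and this added to a Blinn() node"
        return Node("Add", a=self, b=other)
    def __repr__(self):
        inner = ", ".join("%s=%r" % (k, v) for k, v in self.params.items())
        return "%s(%s)" % (self.kind, inner)

tex = Node("ImageMap", file="skin_texture.jpg")
shader = Node("Diffuse", color=tex, value=0.85) + Node("Blinn", specular_value=0.1)
# The sum is what would be plugged into PoserSurface's Alternate_Diffuse channel.
print(shader)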
For the whiz-kids, the material room is fine: for those of us who aren't whiz-kids, it's either too limited ('Simple' tab) or way too inscrutable ('Advanced' tab).
The key is making sure your lighting and materials work together. One is inexorably tied to the other. This needs to be made clear in the interface somehow, perhaps with lighting defaults and material defaults that play well together. And since the program is called "Poser" and what gets posed most frequently are humanoids with typical skin, perhaps lighting and an EZSkin-type material could be set as a default for figures.
Maybe? :blink:
Monterey/Mint21.x/Win10 - Blender3.x - PP11.3(cm) - Musescore3.6.2
Wir sind gewohnt, daß die Menschen verhöhnen was sie nicht verstehen
[it is clear that humans have contempt for that which they do not understand]
CaptainMARC.... good point. Perhaps time for those who consider themselves material room gurus to get to work?
Artwork and 3DToons items, create the perfect place for you toon and other figures!
http://www.renderosity.com/mod/bcs/index.php?vendor=23722
Due to the childish TOS changes, I'm not allowed to link to my other products outside of Rendo anymore :(
Food for thought.....
https://www.youtube.com/watch?v=pYZw0dfLmLk
The Poser Material Objects (PMOs) would be based on one (or more) node groups (MetaNodes)? You would probably have some sort of standard interface for the material objects – name, category, brief description, logo etc. for display in the materials catalog. A PMO could be a simple preset or expose properties for customization. A simple UI editor would be nice, something on the order of the forms editor in Office. I suppose Python could do all of that. Each object would define its UI elements, related properties and methods for driving the underlying node setup. Maybe the PMOs could be combined in some way, à la Vue’s mixed materials or a layer metaphor, so one could (simply) add the displacement output of a reptile skin PMO to freckled skin in another PMO – without getting into node mode.
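Something like this, perhaps – a minimal Python sketch of a hypothetical PMO base class. The names (MaterialObject, properties, build) are invented just to illustrate the shape of the interface, not any existing Poser API.

# Hypothetical base class for a "Poser Material Object" (PMO); illustration only.
class MaterialObject:
    name = "Unnamed"
    category = "Misc"
    description = ""
    logo = None                        # thumbnail for the materials catalog

    def properties(self):
        """The properties a simple UI would expose: (label, type, default)."""
        return []

    def build(self, **values):
        """Turn the user's property values into the underlying node setup."""
        raise NotImplementedError

class FreckledSkin(MaterialObject):
    name = "Freckled Skin"
    category = "Skin"
    description = "Basic skin with optional freckles."

    def properties(self):
        return [("Freckles", bool, False), ("Freckle map", str, "")]

    def build(self, **values):
        nodes = ["SkinBase()"]
        if values.get("Freckles"):
            nodes.append("Freckles(map=%r)" % values.get("Freckle map", ""))
        return nodes

pmo = FreckledSkin()
print(pmo.name, pmo.category, pmo.properties())
print(pmo.build(Freckles=True))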
"Democracy is a pathetic belief in the collective wisdom of individual ignorance." - H. L. Mencken
To cut down on repetition/duplication/error/bloat, some sort of object model for NodeSets (or 'shaders', like for Skin or Sea-Water or Hair or...) would be the go, and would work well with your mechanism, lmckensie. Going to an object approach (vs copies of node groups) might require a re-write, I should think.
Monterey/Mint21.x/Win10 - Blender3.x - PP11.3(cm) - Musescore3.6.2
Wir sind gewohnt, daß die Menschen verhöhnen was sie nicht verstehen
[it is clear that humans have contempt for that which they do not understand]
Quote - “I really think a proper library (EZSkin and other core materials could all make up this library) where the exposed end - the Simple Material Room bit - could be a few tickboxes that, say for skin, could give you freckles or change the skin tone or add water droplets or-or-or... the interface would "expose" a core library node set that would be instanced by a drop-down selection.”
If the material room looked MORE like that, they'd be getting somewhere. You can't make absolute garbage, and you can see what you're getting as you do it. Rarely do you see what you're going to get in Poser's crappy material preview.
Laurie
Good point: with an "interface approach", a lot of erroneous node mess can be prevented. Less choice, perhaps, for the techies, but a lot more satisfied artists in the long run. This idea has tremendous merit!
Monterey/Mint21.x/Win10 - Blender3.x - PP11.3(cm) - Musescore3.6.2
Wir sind gewohnt, daß die Menschen verhöhnen was sie nicht verstehen
[it is clear that humans have contempt for that which they do not understand]
PhilC added his make art button to one of his scripts.
Time for BB to do the same and create a series of Wacros for Gold, Silver, Chrome, Brushed Stainless... various cloth materials, water, blood, smoke and wood (oak, walnut, pine), with and without a clear coat - you get the picture... that are included with the next version of Poser and built into the interface of the material room. :ohmy:
Gary
"Those who lose themselves in a passion lose less than those who lose their passion"
Quote - PhilC added his make art button to one of his scripts.
Time for BB to do the same and create a series of Wacros for Gold, Silver, Chrome, Brushed Stainless... various cloth materials, water, blood, smoke and wood (oak, walnut, pine), with and without a clear coat - you get the picture... that are included with the next version of Poser and built into the interface of the material room. :ohmy:
I was kinda told, in another material room rant, that presets (I'm all for 'em) are tantamount to cheating, even though just about every 3D program I use has 'em ;).
Robyn: I like the idea of an interface like that....but still have the node structure hidden behind it, much like Vue does, so that the people that like to play with that sort of thing, can ;). I think a lot of us will be satisfied with an intelligent, but not crippled interface similar to the above. Maybe some filter profiles, altitude sliders, hard mix, soft mix and all with numeric inputs beside the sliders. And a really decent preview that's not dependent on the lights in the scene, but that's still a live preview. Another program to take a look at may be Kerkythea...really nice material GUI but powerful too. And no node spaghetti ;).
Laurie
Good stuff here :)
If they introduced a formal parameter node type (BB currently uses Math and Simple_Color nodes for parameter nodes), or even just a formal naming convention for these, then it would be relatively trivial to write a UI engine that builds a form dynamically for the end user, I would have thought.
I say relatively... that's relative to human resource, I guess.
Add to that a group node, as also suggested here, and perhaps a presets node, that stored an array of preset arrays, for the UI engine to read in from.
Everything there can still be stored in an mt5 format, or mc6 collection format even?
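As a sketch of how little machinery that might take – plain Python, with made-up node/field structures rather than the real mt5 format or Poser Python API – a UI engine could just scan for the naming convention and build form fields, and a "presets node" could simply be a table of values:

nodes = [
    {"name": "param_Freckle_Amount", "type": "float", "value": 0.3},
    {"name": "param_Skin_Tone",      "type": "color", "value": (0.9, 0.7, 0.6)},
    {"name": "Blinn_1",              "type": "node",  "value": None},   # not a parameter node
]

presets = {   # what a "presets node" might store: an array of preset value sets
    "Pale and freckled": {"param_Freckle_Amount": 0.8, "param_Skin_Tone": (0.95, 0.8, 0.75)},
    "Tanned":            {"param_Freckle_Amount": 0.1, "param_Skin_Tone": (0.7, 0.5, 0.4)},
}

def build_form(nodes):
    """Return (label, widget, current value) for every param_* node found."""
    widgets = {"float": "slider", "color": "colour picker"}
    return [(n["name"][len("param_"):].replace("_", " "),
             widgets.get(n["type"], "text box"),
             n["value"])
            for n in nodes if n["name"].startswith("param_")]

def apply_preset(nodes, preset_name):
    for n in nodes:
        if n["name"] in presets[preset_name]:
            n["value"] = presets[preset_name][n["name"]]

print(build_form(nodes))
apply_preset(nodes, "Tanned")
print(build_form(nodes))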
However, maybe the nodes are redundant?
These are just visual representations of shader definition objects...
If the real mat room gurus are happier just working with an object-oriented script interface, to compose materials, and everyone else just wants the form-based UI...maybe we don't need the nodes?
I do have to say, at the stage I'm at, at least, I think I still find them more intuitive to work with than pure script. But at times I suppose they probably also just obfuscate things for me too...
...looking at a matmatic script, which BB has posted in a thread, which pares everything down, has quite often made it all much clearer for me...
...relatively ;)
Certainly... pretty sure I'd still understand a lot less about materials if all I'd had to work with from the start was an array of EZSkin-like wacros.
Quote - I was kinda told, in another material room rant, that presets (I'm all for 'em) are tantamount to cheating, even tho just about every 3D program I use has em ;)
If someone tells me that, I get the impression that this person just doesn't want to share what he or she has to offer... or he or she wants to make you feel inferior, so they can feel superior. I like playing with the material room, but there are times I'd rather use a preset... when I'm in a hurry or when I just can't get the result I'm after.
Going even one step further, how can anyone using Poser even say that using material room presets is cheating? Isn't the whole precept of Poser about using presets? You get a figure (preset), you add clothes (presets), you add a pose (preset), add lights (preset) and so on and so on. To say that using material room presets in Poser is cheating is a huge contradiction and goes against everything that Poser in essence is... especially when you're using preset after preset on every other occasion. You may as well say stop using Poser and learn to model everything yourself... since all you're doing is cheating.
I'm glad, though, that there are people who like to share their presets, either free or as a vendor... it's a great blessing to those of us who use Poser! So I'm hoping, as others are, that the next version of Poser pays attention to material room presets and adds lots of them! Let's hope they add more ways and features to cheat, so we can create great images :-)
Artwork and 3DToons items, create the perfect place for you toon and other figures!
http://www.renderosity.com/mod/bcs/index.php?vendor=23722
Due to the childish TOS changes, I'm not allowed to link to my other products outside of Rendo anymore :(
Food for thought.....
https://www.youtube.com/watch?v=pYZw0dfLmLk
Technically, mt5 and mc6 files are presets.
I now have a great collection of mat room presets, in that format, supplied mostly by BB...
...for which I'm very grateful ;)
However, what we're talking about here, as I understand it, and really I guess I'm typing this for the avoidance of (my) doubt, is at a different level... or levels.
The new scatter node in PP2012 has built-in presets, for Skin1, Skin2, Milk, Marble, etc.
There's that sort of level... i.e. having a node (and it might be a node group) that represents an aspect of a class of materials with a common property (e.g. in the above example SSS).
You might also have a Metal node... with some presets (e.g. for Iron, Steel, Aluminium, etc)...
...although the BBGlossy (and now BBGlossy2) material covers a lot more than just metal, as I understand it.
So you might just be able to have a "Glossy" node, that covered metal, varnished wood, leather...
...aspects of texture could be in the preset perhaps. That's where this goes beyond the nodal format and into the territory more that EZSkin is currently trying to cover, I reckon.
Anyway, the group node, already discussed, would perhaps allow BBGlossy to be represented to the end user as a single node, with a bunch of presets.
Equally, the intermediate UI, also already discussed, could represent the parameters for the BBGlossy super-material (for example) as a form with handy dropdowns, sliders and choosers... and those same presets, for Iron, Steel, Aluminium, etc.
That's how I envision it could / should work I guess...
...so, in short... I'd agree with what grichter said above ;)
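In other words, something as simple as this (plain Python, with values I've invented) might be all a "Glossy" super-material's presets amount to: tables of parameter values that a group node or the form UI reads in.

GLOSSY_PRESETS = {
    "Iron":          {"base_colour": (0.36, 0.36, 0.38), "roughness": 0.45},
    "Steel":         {"base_colour": (0.55, 0.56, 0.58), "roughness": 0.25},
    "Aluminium":     {"base_colour": (0.77, 0.78, 0.80), "roughness": 0.15},
    "Varnished oak": {"base_colour": (0.45, 0.30, 0.15), "roughness": 0.10},
}

def glossy(preset, **overrides):
    """Start from a preset, then let the user tweak individual sliders."""
    params = dict(GLOSSY_PRESETS[preset])
    params.update(overrides)
    return params

print(glossy("Steel"))
print(glossy("Aluminium", roughness=0.3))   # same preset, user-adjusted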
Quote -
The only thing you need in the newer material rooms is the diffuse texture.
All the rest, from Specular and Blinn to Bump and Displacement, can be built inside the math room, as some of us have been showing over and over again.
That sort of seems like the problem right there. Do away with those nodes and you force users to string together more complex shaders. Keep the nodes and you present users with a daunting array of unclear choices.
The room works as it is. Most people just don't know where to begin assembling a desired shader.
Simplifying it to keep it in line with the rest of the program makes sense. My suggestion would be to allow materials to be built piecemeal from the library. Where we now have compound materials that may be modified, it would be nice to be able to store colour, lighting, bump/displacement/transparency etc. separately in the library. Then a user could pick a colour in the picker, choose whether the material is a plastic or metal etc., pick from slight bump to heavy displacement, then choose the type of reflection etc. I think what most users want is less visual clutter and to be insulated from complex, math-heavy noodles. Being able to show/hide, copy/paste and move groups of nodes simultaneously would also be an improvement.
In essence, I think that is what snarlygribbly's EZMetals are: advanced shader nodes with a simple pick-and-click function - choose gold, click the gold mat icon and voilà, gold preset applied.
If it can be done with these, and for instance with the out-of-the-box PoserPro mats for advanced wood shaders etc., it really would be so easy... Yes, ailkema was right: so many people choose Poser because of its click-and-"pose/light/camera" functions. Extend that function across the board and leave the advanced settings in the background for those who want them. Blah blah, etc., 'nuff said.
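A toy sketch of that piecemeal idea, in plain Python with invented entries (no real Poser data): store each aspect separately in the library and let the user pick one from each column.

ASPECTS = {
    "colour": {"brushed gold": (0.85, 0.65, 0.20), "oak": (0.45, 0.30, 0.15)},
    "finish": {"metal": {"specular": 0.9}, "plastic": {"specular": 0.4}},
    "relief": {"slight bump": {"bump": 0.002}, "heavy displacement": {"displacement": 0.02}},
}

def compose(colour, finish, relief):
    """A pick-and-click material: one choice from each aspect."""
    material = {"colour": ASPECTS["colour"][colour]}
    material.update(ASPECTS["finish"][finish])
    material.update(ASPECTS["relief"][relief])
    return material

print(compose("brushed gold", "metal", "slight bump"))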
Some of the one-click materials ideas go out the window when you have a sliding scale renderer.
Or, in other words, some materials require advanced rendering options and a lot of Poser users will crank this down to minimum, destroying materials.
Solution? Applying the material preset changes the render options to whatever it needs and sends up a warning to the user telling them not to change them. If the renderer is already set higher than what it needs, it doesn't change anything.
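A tiny sketch of that guard in plain Python; the setting names here are made up for illustration, not Poser's actual render options:

current = {"irradiance_quality": 3, "pixel_samples": 2}
preset_requires = {"irradiance_quality": 7, "pixel_samples": 3}   # e.g. what an SSS skin preset needs

def apply_preset_settings(current, required):
    """Only ever raise settings; warn the user if anything was changed."""
    raised = {}
    for key, needed in required.items():
        if current.get(key, 0) < needed:
            current[key] = needed
            raised[key] = needed
    if raised:
        print("Warning: render settings raised to", raised, "- please don't turn them back down.")
    return current

apply_preset_settings(current, preset_requires)
print(current)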
Don't know if this is an idea: would auto-detection of hardware, setting optimal defaults for lighting, and then limiting which materials are available for those lights, be an option? Just thinking out loud...
Monterey/Mint21.x/Win10 - Blender3.x - PP11.3(cm) - Musescore3.6.2
Wir sind gewohnt, daß die Menschen verhöhnen was sie nicht verstehen
[it is clear that humans have contempt for that which they do not understand]
My solution would be to just figure out how to make a high-quality renderer not take very long to render and leave it set for high quality.
Remember what life was like before MP3s? You had 50 or 60 meg files for one song in a WAV, and that wasn't very handy.
All of a sudden, the MP3 codec, and BAM... off to the races.
Someone is going to "Mp3" the rendering business.
Quote - Someone is going to "Mp3" the rendering business.
A friend of mine works at a professional studio, and the average song takes up from 2 to 200 gig of hard drive space depending on how many tracks are in it. Then it is mixed down to the final version that can be anywhere from 2 to 8 channels.
Crunching it to an mp3 does not eliminate all the steps that took it from a hard drive's worth of data per song down to a few meg of mp3. Nor does it eliminate the time to record all those tracks.
Mp3 is a compression codec, it doesn't play the song, record it, or mix it down.
A render engine is a tad more complex than making an audio zip file.
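For what it's worth, the back-of-the-envelope numbers, as a quick Python calculation (CD-quality assumptions, roughly illustrative only):

# One 4-minute stereo track: CD-quality WAV vs a 128 kbps mp3.
seconds = 4 * 60
wav_bytes = 44100 * 2 * 2 * seconds      # 44.1 kHz, 16-bit (2 bytes), 2 channels
mp3_bytes = 128000 / 8 * seconds         # 128 kbps constant bitrate
print("WAV: %.0f MB, mp3: %.0f MB" % (wav_bytes / 1e6, mp3_bytes / 1e6))
# A multitrack session is that WAV figure times the number of tracks (and takes),
# which is how studio projects climb into the tens or hundreds of gigabytes.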
It would be great if there was a way to get around all the math involved in rendering, but that won't happen on any hardware we have access to. Rendering has already basically hit a wall in what can be done. To see any huge reductions in rendering times is going to require new hardware designs that are based on quantum computing.
Currently there is only one quantum computer for sale, and it costs about 10 million to buy it, a small fortune to install it, and another small fortune to run it.
Some things are easy to explain, other things are not........ <- Store -> <-Freebies->
Yeah... mp3 is really just the equivalent to jpeg, isn't it?
Poser already saves a jpeg for you...
...although, sure, I tend to save from Poser as PSD and then use Photoshop.
Anyway... it takes me an average of a day or so to run a final quality render... after however many days of actually setting up a scene and test rendering.
But if I save the end result as a jpeg I can still get it under the 512K or so it needs to be to upload to the gallery here...
;)
It can take a while to "render" a decent mp3 from a master track too, e.g. from Pro Tools or Logic, by the way... quicker than a 3D render, sure. But it does still take some computing power to do that conversion to a professional grade of finish... without losing definition from the key frequencies.
I think the .mp3 equivalent for the rendering business is the GPU. Look at something like nVidia's realtime "Samaritan" video and try to imagine how long that would take to render using Poser's renderer. Within two years the expected level of real-time graphics realism will be unattainable to all but a handful of Poser gurus.
This video may still be in the uncanny valley, but it looks like they're getting pretty close to climbing out the other side... (Contains some colourful language)
http://www.youtube.com/watch?v=1va8FxBl7Cg&feature=player_embedded
Samaritan:
http://www.youtube.com/watch?v=ttx959sUORY
"… object model for NodeSets ( or 'shaders', like for Skin or Sea-Water or Hair or...) would be the go …"
Yes, I would think that they would have some common interface, the same way that COM objects have in order to work together. The inputs via properties and the internal processing via methods would differ with each object, but the outputs could probably be standard, e.g. color, specular, displacement etc. You would have a standard library of built-in objects, and newly added ones would be available for use. There might be components whose only function was doing calculations. You could easily use high-level objects without being concerned with the low-level definitions. You could add properties to the fanciful Freckled Gecko Zombie skin and create a UI to adjust them. Of course, that raises the issue of dependencies – you would need to have all the constituent parts to use a composite object, the same as requiring morph packs etc. In principle, perhaps the same approach might be applicable to hair and cloth. It would be nice to be able to use a distribution map for things like facial and pubic hair from within the material interface, i.e. hair and fur are just materials. There would still be a separate ‘room’ for head hair of course.
' Hypothetical pseudocode only: compose a material from three preset objects
Function FreckledGeckoZombie() As Material
    Set dp = New oNode(DermaPro)
    Set rs = New oNode(ReptileSkin)
    Set zombie = New oNode(DecayedFlesh)
    dp.Preset = "Freckles"
    rs.Preset = "Gecko"
    zombie.Preset = "NewlyDead"
    ' borrow the reptile skin's displacement for the freckled skin
    dp.Channels(Displacement) = rs.Channels(Displacement)
    ' feed the composite skin into the zombie material
    zombie.VictimFlesh = dp.Material
    zombie.BloodTint = dp.PigmentEnd
    FreckledGeckoZombie = zombie.Material
End Function
"There's that sort of level... i.e. having a node (and it might be a node group) that represents an aspect of a class of materials with a common property (e.g. in the above example SSS)."
Since SSS does cover a range of materials, I do wonder if it would be possible to have it as a type of calculation or service component. When called by another component, it would receive all the necessary inputs like depth etc. and return the proper results. Something like water drops might be a component/material object that could be used by just about any material – at least in general. A more specific skin implementation might consider that water on oily skin would behave differently than water on dry skin etc.
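To illustrate the "service component" idea in plain Python – the function and its inputs are invented placeholders, not a real scattering model:

def scatter_service(depth, scatter_colour, incoming):
    """Crude stand-in: attenuate the incoming light with depth and tint it."""
    falloff = 1.0 / (1.0 + depth)
    return tuple(c * i * falloff for c, i in zip(scatter_colour, incoming))

def skin_material(incoming):
    # a skin object calls the shared service with its own depth and colour
    return scatter_service(depth=0.2, scatter_colour=(1.0, 0.6, 0.5), incoming=incoming)

def marble_material(incoming):
    return scatter_service(depth=0.8, scatter_colour=(0.9, 0.9, 0.95), incoming=incoming)

light = (1.0, 1.0, 1.0)
print("skin:  ", skin_material(light))
print("marble:", marble_material(light))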
"Democracy is a pathetic belief in the collective wisdom of individual ignorance." - H. L. Mencken
Keep in mind that mp3, like jpg, is a lossy compression...
----------------------------------------------------------------------------------------
The Wisdom of bagginsbill:
"Oh - the manual says that? I have never read the manual - this must be why."Wow moogal...that first vid was great :). The movements are really good, but I do see a few things about it that really do it in....lol. Seems no one can make the eyes look alive = until they can kill the dead eyes look, uncanny valley will continue. Heh. In addition to that, (the clothing especially) looked to "clean" for want of a better word. No subtle variation in texture, glossiness, no worn spots (like on shoes), etc. But it's really close ;).
edit: Oh, its a GAME???! Ok, for a game, that looks quite awesome. LOL.
Laurie
Quote - Yeah... mp3 is really just the equivalent to jpeg, isn't it?
You were probably being rhetorical, but yes, it's lossy. Once you take a song and make a 128 kbps mp3 out of it, you lose quality that can never be restored. You need the source again. True audiophiles can hear the difference, even at the lowest compression (and no, I'm not one :D )
O.k., don't throw rotten vegetables.
In the midst of all the tech and emulation of real-world phenomena, I guess the ideas of color theory and image composition are somewhere of secondary concern? (An external exhalation of a much more elaborate internal dialogue.)
I think RobynsVeil's comments about simple materials in relation to IDL (which I use almost exclusively now for final renders - it's the soft occlusion) are interesting...
"So, simple materials - i.e., a texture map plugged into Diffuse(.85) and this added to a Blinn() node, then going into the Alt_Diffuse channel of PoserSurface() - look better and render better than highly involved node-set materials."
Is my screen cap headed in the right direction? The specular node going into the alt-diffuse is perhaps a little confusing, but I'm easily confused by the math room, and so my interest in simple materials with the newer lighting features is understandable.
A poser 8 hobbyist user who is yet to be a jaded power user. ;)
Ps. It would be great if Parrotdolphin managed to make some of her lovely materials as presets for a hypothetical material room overhaul.
Quote - Is my screen cap headed in the right direction?
I've been reading in this thread (and others) about how people find the Mat Room complicated and frustrating, and I've previously been a little frustrated myself wondering "Why do they think that?" Now, I'm always going to take a deep breath before I implement some hugely complicated mathematical function in terms of Poser nodes but the basics and the workings and everything seemed pretty obvious to me. But seeing how you implemented "adding a Diffuse node to a Blinn node" in your screencapped example made me realize that it's easy for the people who understand the Mat Room to take those basic things as given.
In short, your set-up will not give you the result you expected (based on the quoted text in your message about adding nodes together).
The longer version:
When I'm interpreting a tangle of nodes, I always work out what the "chains" are, and more importantly what the ends of those chains are. Think of it in three stages: (i) inputs, (ii) processing, (iii) output. (It's a little more complicated in practice, but this theoretical approach works fine for sorting stuff out.)
To work out the rough groupings of these three parts, look at the chains that are being presented. (Or write them down!) Each of those red/yellow/blue/whatevercolour lines that connect the nodes are part of the chain, and if you trace a line of connected nodes back you will eventually arrive at a node that has no line entering it. (Lines coming out the left represent info leaving the node, i.e., pushing stuff further down the chain; lines entering a node on its right represent info entering the node, i.e., stuff coming in from earlier in the chain.)
Those nodes that have nothing entering them are, for consideration in this way of analyzing nodes, the raw inputs. They're not being changed by any other nodes (they've got no input from elsewhere that might affect them) so consider them part of stage "(i) input". These will typically be texture maps, pattern generators (Noise, Turbulence, Wood), simple colours, and a few others you'll learn to recognize.
Ignore the specifics of finding stage "(ii) processing" for now and look at where that particular chain links into the big node Poser Surface. That's the output. That's where whatever stuff has happened to your original inputs along the way (during "(ii) processing") escapes to the surface and appears when you hit Render. (As I said, though, it's slightly more complicated than that because Poser Surface can do its own bit of processing before the colours escape to the screen but that can be ignored for now.)
Everything between (i) and (iii) is stage (ii), the processing bit.
Now, why have I gone on about this?
Look at the screencap you posted and work out what the stages here are.
As I can see it:
(i) input = Image_Map. Nothing else is entering it, and as you'd expect, it's usual practice to start with a texture image as the raw stuff to begin working with.
(iii) output = Alternate_Diffuse. This is where the chain of nodes enters Poser Surface, so this is where the results of your processing escape to the real world (or your screen, in other words!). There are a few caveats here and there about which outputs can be used (some have a few quirks, others have warnings about stuff to avoid, and yet others require certain things to be contained within the chain before it will look like you expect it to look). But Alternate_Diffuse is a good place to start (it has a requirement, but that's been covered so I won't digress further by going on about it.)
That leaves Diffuse and Blinn as the stuff between (i) and (iii), so by my definitions above, it falls into "(ii) processing". Which is fine, but only as long as you actually want that kind of processing to take place!
Look at where the input goes next, i.e., the first node in the chain of processing. It's a Diffuse node. This, in short, takes whatever input it's been given, looks at what lighting is going on in the scene, and puts your input through some calculations to make it appear like that input is being lit by the lights that are in your scene and you're viewing it from the angle/position that your camera is set at. That's exactly what you want.
But now look at the next node along the chain. The output of the Diffuse node (the nicely lit piece of texture map that you've supplied) is acting as the input to a Blinn node. The Blinn node, in short, looks at the lights in your scene and, depending on the settings you see in the Blinn node parameters ("Reflectivity", etc), works out what kind of highlights (small, wide, sharp, blurry, and so on) you should see on that surface. Exactly why this is slightly different (and more realistic) than a standard Specular node is not important, but effectively it IS just a specular node for this purpose.
Notice where the chain enters the Blinn node. Whatever processing has taken place before the Blinn node is arriving as input to the Specular_Color part of the node's values. This, as you might expect, affects the colour of what you see in the highlights. What you're saying here is "The highlights I would like to see on this surface must be coloured according to what my image_map looks like." But remember that you've also applied some processing to the image_map input (in the Diffuse node), so the full description of what you're asking it to do here is "The highlights I would like to see on this surface must be coloured according to what my image_map looks like, but first change what image_map looks like depending on how the lighting in my scene is set up." This is not so usual, but it's not unheard of (e.g., getting a metallic look will involve colouring the highlights depending on what the surface colour is supposed to be). But if you're aiming for what the original quote suggested, it's starting to lose its way.
Why? Well, the original quote said to ADD the two things (Diffuse added to Blinn) together. Here, what you're doing in mathematical terms... in fact, whenever the output end of a chain is plugged into the input of a node... is to MULTIPLY. Keep that in mind: Node inputs work by multiplication, not addition. Anything plugged into something else will take the anything and multiply it by the something, not merely add the two together.
Now, just to continue a little further... remember what the Blinn node is saying should happen during processing? It's effectively working out the highlights, nothing else. It doesn't colour the entirety of the object; all that specular nodes do is work out where your highlights should be, and that's what their output is: highlights.
So what drops out of the end of your chain is highlights and nothing more. That's fed into Alternate_Diffuse, meaning it's the end of the line and nothing else gets changed, so what will show up on the screen at render time is highlights. Mainly because your chain of nodes goes:
Input a texture.
Work out what that texture should look like depending on camera position and lighting conditions.
Work out where the highlights should be, what size they are, how blurry/sharp they are, and then colour them according to what the previous step found out.
Output those highlights to the screen.
Notice that even though you started with an image and fed it into the Diffuse node to give a nicely lit and textured surface, that information fails to escape to the surface and is used instead as input to your highlight/specular processing. What's needed is some way for the Diffuse node to also escape to the surface. There are at least two ways to do this simply using the set-up you have here, but I think I've written enough for one message! (But I'll continue if anyone finds this is useful.)
Oh, and an edit to say: This is not meant to be belittling or overly critical of you personally or your node set-up, primorge. Just my reaction to realizing that there are some essential basics in the Mat Room that people who understand it should keep in mind are not quite so obvious to everyone.
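To put a number on the multiply-versus-add point above, here is a tiny plain-Python sketch; the single-pixel values and factors are made up purely to contrast the two wirings.

texture = 0.6               # what the Image_Map supplies at this pixel
diffuse = texture * 0.8     # Diffuse node: the texture as lit by the scene (stand-in factor)
highlight = 0.3             # what the Blinn node contributes at this pixel (a highlight)

# Wiring in the screencap: the Diffuse output plugged INTO the Blinn's Specular_Color.
# Plugging one node into another multiplies, so only tinted highlights reach the surface.
as_posted = highlight * diffuse

# Wiring the quote describes: Diffuse ADDED to Blinn (e.g. via a Color_Math Add node),
# with the sum plugged into Alternate_Diffuse.
as_intended = diffuse + highlight

print("into Specular_Color (multiply): %.2f" % as_posted)    # highlights only
print("Diffuse + Blinn (add):          %.2f" % as_intended)  # lit texture plus highlight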
I find the Material room HIGHLY confusing, even if I were to look at it like you do.
There are way too many options, and I just do not understand where to plug in what.
To understand it in a way that I could actually use it for anything other than making fur and reflections, I would need someone to sit beside me and explain it to me as if talking to a little child.
I am not the only one; there are so many who cannot work the Material room, so I do not feel dumb or guilty about it.
The inter-connectable, nodal building block metaphor of the mat room is already a simplification of the underlying mathematics from which a shader is compiled, I guess? ;)
(although, I suppose it could also be seen as becoming a convolution of that too, in places?)
Personally, I'd say I have a reasonable grasp of it now forming, and I rather enjoy using it... especially now I have a bit more of an understanding of the roles each node plays (in representing an underlying equation or set of equations)... and especially now I have built up a pretty decent library of ready made, preset shaders, in mt5 format... mostly supplied by BB, to play around with and learn a little from.
I'm originally a visual artist... but whilst I'm not a great mathematician, I've always enjoyed maths, and been okay at it, and, well, I now work writing fairly complex computer programs for a living, so I guess I'm in the group that's predisposed to tackling the process of learning to use the node metaphor... and keen to perceive and understand what is under the hood of that.
But, I do entirely see why a new intermediate level mat room UI is so warranted... and so wanted, by probably the majority of people... and it's something I'd very much like to have to use myself... in addition to the nodes, and indeed whatever else might extend that in a still more advanced direction (per what I understand as Robynsveil's earlier posited object-oriented material language idea)... e.g. I have been enjoying playing around with BB's matmatic recently. Took me a while to work up to tackling that though...
Anyway... Poser is after all, primarily intended as a tool for visual artists, isn't it? Whether they be hobbyist or pro...
...and a big part of its market appeal is, I guess, to the hobbyist... or at least "prosumer" market demographic?
The fact that it is a fun and extensible piece of software for the more programmatically minded to hack away at (whilst a great, indeed, I'd say essential, factor in its make-up) is surely best being secondary to its function as a tool for making pictures or animations using 3D assets?
Quote - The only thing I would change in the Math room is remove/prohibit illogical combinations.
Yes... very good point Vilters. This "simple" addition, of some more validation, would make a big difference to a lot of people I suspect.
A little pop up help description / usage tip (with up-to-date and accurate information) for each node would be a great addition too I think.
Quote - Chug chug chug chug chug chug...
I've just started a hair sim. It's using about 12% CPU and almost no RAM.
Looks like it'll take a few hours. Although it's using so few resources, Poser is blocked, I can't do anything else. Oh well...
I was thinking about how this background processing of sims could work recently... e.g. a cloth sim is operating on your scene, so it would be inappropriate to allow you to fiddle with the scene... or at least with all the actors being operated on... which could include much of the scene, potentially, e.g. floor, etc... for a cloth sim.
A hair sim may have fewer dependencies in the scene, I guess?
But maybe a sim queue would be needed?
Maybe the whole scene would need to close down while the sim ran on it in the background... but you could then work on another scene?
Maybe it just needs more multi-threading, somehow, to simply make it faster...
I don't know if this is completely relevant, but the complexity of the material room with advanced nodes is daunting. I have yet to find a simple dummies' guide to its use. I also perceive that when more experienced users are discussing advanced shaders, it tends to go right over my head, and I often find myself looking to see if they posted examples of their shader set-ups, as I don't understand how to create them myself.
Maybe one big improvement to any future Poser editions would be the inclusion of a 'nodes for dummies' PDF that explained exactly what each node did, maybe utilising some sort of chart similar to the ones in certain strategy games where, to achieve x technology, you first need to have a, b, c, d, e and g; so, for example, if you want to achieve wood, you would need to plug node 1 into port 5, etc. In fact, just anything that made it simpler to understand.
Quote - I imagine a real-time/software hybrid render. The whole process will be reimagined from the root up, and it will be fast.
Well, you've got an Octane Render plugin for Poser and the much more integrated options for using LuxRender that Reality 3 will offer too... both coming along.
Both offer different levels of GPU support... and, well Octane at least, as I understand it, is probably at the cutting edge of what's currently possible there?
Will SM want to invest heavily in redesigning Firefly, or a replacement, from the ground up... or just continue to enhance the plugin infrastructure that allows interoperation with the above two render engines... maybe also with Blender and Cycles... VRay et al, if you're going for the more expensive high-end in your workflow...
...and interoperation with whatever else comes along, next?
This is the question on my mind.
I think Poser needs an internal, biased render engine, yes.
I'd really like to see them continue to enhance Firefly. I really hope it is not discarded any time soon.
But, IMHO, I think the most we can realistically hope for is some incremental improvements to Firefly, to bring out the best that it is currently capable of?
I'd rather push for the possible, than the impossible... if you see what I mean?
Well, I'm not a fancy, big-city poser user and I don't know much about the Politics and Mechanics of working on a render engine.
I just know what I need, and Firefly ain't it.
Down here in Farm country, when you want to plow a field, you're going to need a tractor.
To my simple mind, Poser is trying to hook 'ole Bessie up to a backhoe and that just isn't cutting it for this Corn-and-dairy operation.
Quote - Well, I'm not a fancy, big-city poser user and I don't know much about the Politics and Mechanics of working on a render engine.
I just know what I need, and Firefly ain't it.
Down here in Farm country, when you want to plow a field, you're going to need a tractor.
To my simple mind, Poser is trying to hook 'ole Bessie up to a backhoe and that just isn't cutting it for this Corn-and-dairy operation.
Yeah, fair enough... and I like that metaphor more too :)
I just think that a new tractor is a bigger investment than we're likely to be seeing at this point in time?
If they could make that investment and not pass that cost onto us poor grain consuming chickens... then hallelujah!
But I suspect what we'll see is ongoing repairs / patching... maybe some new Eastern European or Chinese parts for the current tractor... at best. He he...
(personally I'd settle for that, at least for the time being, because I can get results I'm happy with already)
I suspect that anyone who wants more will have the option to pay extra for a... okay, the metaphor's gone now...
...anyone that wants more will have to fork out for Reality 3 or the Octane Render plugin, plus Octane Render.
I am perfectly open to being proved wrong of course... always am ;)
Go to the Adobe web site and look at Creative Cloud and the different plans they offer. Part of it is picking what you want. Now migrate that same model to Poser and include content, render farms, different render engines and tools, and somehow I don't think you will get what you want for 15 dollars a month. More likely it will be $50 or (a lot) more. For some it will be a lot cheaper than what they spend now, but for most it will be a lot more.