Well, my friend, you've got the biggest question in the Superfly-verse. It depends a lot on what you want from your render and what types of effects it includes. Here are some tutorials that should help clarify things so you can make an informed choice about your Superfly render settings.
Poser 11: Superfly Render Settings
How to get the best results from Poser 11's Superfly render engine
These should really get you going.
Render On!!
Boni
"Be Hero to Yourself" -- Peter Tork
I have to say that, after the limited time I've used Superfly, it has great potential, but I'll probably die before I get a render I'm happy with. It's kind of like life... everything takes longer and costs more. Doubled.
Coppula eam se non posit acceptera jocularum.
It is "almost" unusable. It does do a better job on some things, particularly humans. Could just be me. I have a 16 core cpu, and a fast gpu, which I thought would be enough. Nope. I found that if you set large buckets for memory, even if you have lots of memory, Superfly will just blink out. It stops to a blank render screen. But, i was doing it wrong which I found out thanks to Boni. 64 is suggested bucket size and then it works fine, just takes several lifetimes. Even in the video the narrator suggests that if you set things up on high settings, the render will not finish in your lifetime, and he is using a Xeon Cpu, which should be quick...er. Thanks everyone!
Sounds very odd. To give an example: a 4-core 6700 and a Founders Edition 1070 rendered the 1250x924 scene below, with 18 figures plus conforming clothing for each figure and 100+ textures, in about 12 minutes while I watched YouTube and read your post. I'd expect a 1080 Ti to be sub 10 minutes. I had the bucket size set at 512.
Thanks Iron. I was able to run one with your settings using my CPU. Not too long. I then tried with my GPU. K-rash. Went straight to a gray Poser render screen. Weird! So I set the bucket size lower, tried again, same thing, instant crash. This should not be happening, I says to myself. Did some research and came up with this: CUDA error at cuCtxCreate https://forum.smithmicro.com/topic/3854/cuda-error-at-cuctxcreate-launch-failed/11
You have to change a registry key and then it will work. In my case I had to ADD the missing key, but it's working now. Sheesh, how is anyone supposed to know this stuff?!
My scene was 2 figures and 1 prop (a big prop, almost a figure itself), with LOTS of reflections from robot skin, pistols and the machine prop. Anyway, Superfly rendered the scene using the CPU in 2056.62 seconds, or about 34 minutes. The same scene using the GPU took 2023.06 seconds, or about 33 minutes, so not a huge difference.
Also, I was rendering at 6600 x 10200 at 600 pix/in, NOT 1427 x 1080, so that could account for the "takes forever" part. I like to make large images and shrink them down a bit in Photoshop for sharpness. Probably wrong about that too, but then I can make poster-size prints.
Anyway, thank you very much for your help. I'll probably play with it some more to see what I can get away with time-wise and resolution-wise.
Content Advisory! This message contains nudity
It's working now. I used Snarly Gribbly's EZSkin, and I used Ghostship eyes, because in a plain V4 render the whites of her eyes were glaring. The eyes came out great. I rendered pretty much with your settings, except I bumped the pixel samples a bit, to 35.

Comparing a Superfly render with a Firefly render, yes, there are differences, but in this case the Firefly one looks better to me, IMHO. I was really hoping for a more realistic "human" look to the M4 and V4 faces, though they DO look pretty good in both.

There are some interesting peculiarities. I put a lightning image on a basic disc and made the disc transparent so only the lightning shows. That worked well in Firefly, not so well in Superfly (not terrible). In Superfly it renders the lightning on BOTH sides, slightly offset, so it looks doubled. Not a bad thing; it probably has more to do with my camera being set slightly off from center, and dead on center it would likely have rendered perfectly.

I decided to attach an image: LEFT is Superfly, RIGHT is Firefly. It's hard to show nuance at a small image byte size. I used a shiny silver material for parts of her body (android). I marked this as nudity just in case, as her skin is skin, not clothing. I couldn't get rid of the floor in the Superfly image, which was a little weird; it worked fine in Firefly (probably ignorance on my part). They are both good in many ways. I like her face better in Superfly; it appears more doll-like in Firefly. But the skin is shinier in Firefly, which I prefer.
This project is getting a little old to me and I'm getting itchy to move on. :) So Firefly will probably win this battle. Many thanks, Iron. Really appreciated. Happy Holidays and Happy New Year.
Thanks putrdude.
I think the main thing is to pick the render engine you have the most fun with, or failing that, the one least likely to drive you insane (too late for me, I'm already a few buckets short of a full render). If you decide to pick up Superfly again I'd recommend trying HDR lighting; it may not be the style you want but it's worth checking out. If you're interested in doing this after the holidays, send me a PM. Happy Christmas :)
Thanks Iron! You are so right about the insane part. I just rendered an image overnight only to find that only two eyeball shadows rendered, hahaha. "A few buckets short of a full render", I like it! Only crazy animator/Poser/other artists would even get it. I will try Superfly again, especially now that it's working. Merry Christmas and Happy New Year! Thanks again.
A big thing in Superfly is Ambient in shaders. Superfly actively checks lights, and every material with Ambient set IS considered to be a light. Removing Ambient from materials which aren't supposed to glow will get your images clean in far fewer samples.
By the way, the number of samples you need to get a clean image can vary greatly. I usually do a partial render of a patch of skin to see how many samples I need.
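If you have a lot of materials to check for stray Ambient, a little script can flag them faster than clicking through the Material Room. This is only a rough sketch: it assumes the PoserPython calls `poser.Scene()`, `Actors()`, `Materials()`, `AmbientColor()` and `Name()` behave the way the PoserPython manual describes them, so treat it as a starting point rather than a drop-in tool.

```python
# Rough sketch: list materials whose ambient color isn't black, i.e.
# materials Superfly may treat as light emitters. Method names
# (Actors, Materials, AmbientColor, Name) are assumed from the
# PoserPython manual -- verify against your Poser version.
import poser

scene = poser.Scene()
for actor in scene.Actors():
    try:
        mats = actor.Materials()
    except Exception:
        continue  # cameras, lights, etc. may not expose materials
    for mat in mats or []:
        r, g, b = mat.AmbientColor()
        if (r, g, b) != (0.0, 0.0, 0.0):
            print("%s / %s : ambient = %r" % (actor.Name(), mat.Name(), (r, g, b)))
```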
Which graphics card do you have? I have a GTX 970. I can render images with it at roughly the same speed as I can with my 8-core, 16-thread Ryzen. If you have a 16-core Xeon, things may be similar for you. Try a render with the following settings:
Please take note: you need smaller buckets for CPU rendering. This will affect speed a lot.
Use a smaller resolution for that one, render for 10 minutes or so, then cancel the render. Then do a CUDA render with these settings:
Let it run for the same time before canceling.
Then check both images closely: which one is less grainy? That's the rendering mode which is faster on your system.
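If you don't trust your eyes to judge which of the two timed test renders is grainier, you can measure it. Below is a generic Python/NumPy + Pillow sketch, nothing Poser-specific: crop the same flat, evenly lit patch (a wall, a cheek) out of both test renders and compare the standard deviation of the pixel values. The render with the higher number is the noisier one, so the other mode is the faster one on your machine. The file names and crop box are placeholders.

```python
# Compare the graininess of two equal-time test renders by measuring
# the pixel standard deviation inside the same flat, evenly lit crop.
# File names and the crop box are placeholders -- adjust for your files.
import numpy as np
from PIL import Image

CROP = (400, 300, 500, 400)  # left, upper, right, lower: pick a flat area

def grain(path):
    patch = Image.open(path).convert("L").crop(CROP)  # grayscale crop
    return float(np.asarray(patch, dtype=np.float64).std())

cpu_noise = grain("test_cpu.png")
cuda_noise = grain("test_cuda.png")
print("CPU render noise : %.2f" % cpu_noise)
print("CUDA render noise: %.2f" % cuda_noise)
print("Less grainy (i.e. faster for you):", "CPU" if cpu_noise < cuda_noise else "CUDA")
```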
After that, you should check, with small parts of the image at full resolution, how many samples you need to get a clean image. Use skin or shaded areas. Then render the whole image with that number of samples.
You will get much shorter render times. And remember: make sure you don't have materials with unneeded Ambient. That can easily multiply the number of needed samples by three or four.
A ship in port is safe;
but that is not what ships are built for.
Sail out to sea and do new things.
-"Amazing
Grace" Hopper
Avatar image of me done by Chidori.
Many thanks, Bantha, I'll try your settings. I would think 600 pixel samples would give crystal-clear results. I don't know what I'm doing with shaders or ambient settings... sadly. I suppose I should by now, but there's always another project looming. Anyway, I'll give it a shot. It turned out my 1080 Ti was a hair faster than my 16-core AMD chip. Thanks again!
I don't understand why render time is such a big worry for people.
Render quality should be of primary importance. Superfly can produce renders far superior to anything that ever came out of Firefly. It's an unbiased renderer so naturally those results are going to take longer, especially when you include things like refraction/caustics, strand based hair, etc. Superfly isn't "slow", it's just that Poser users have gotten used to using heavily biased renderers. People had the same complaints when Firefly came out, even though it was light years ahead of the P4 renderer.
Your typical workflow should be:
Create a small preview size (Render Dimensions: Fit in Preview Window), turn on Progressive Refinement in the Render Settings, and use that to do some quick preview renders of your lighting/scene.
If your preview looks good, zoom in and double-check areas like the eyes, etc. with Render: Area Render, rendering only that small area.
When everything looks good, crank up the render dimensions, increase the samples and let it render while you go do something else. If it's a complex render go watch a movie, or let it run overnight while you sleep.
The 600 samples in my example are not meant to render through to the end. The idea was to let it render for a certain time, then cancel the render. Usually, I use 35 to 40 samples for clean images. Often, I just let it render (in progressive mode!) and cancel after a couple of hours if I see that no grain is left.
putrdude posted at 2:25PM Fri, 14 December 2018 - #4341792
I seem to see grain even at 35 samples.
This is missing information required to make the statement useful. It is as if I said "I still don't have enough money at 35 dollars". Enough for what? A hamburger or a BMW?
35 samples can be more than enough in some situations and in others fall short by a factor of 10.
Renderosity forum reply notifications are wonky. If I read a follow-up in a thread, but I don't myself reply, then notifications no longer happen AT ALL on that thread. So if I seem to be ignoring a question, that's why. (Updated September 23, 2019)
Looking at someone's render settings and trying to use them directly, without justifying those settings against how similar YOUR scene is to THEIR scene, makes very little sense. Consider this:
Here is a scene lit with 4 point lights and an "evening" sky dome outside. Almost all the light is coming from the point lights. This is 35 samples. This many samples is more than enough. This scene is a "hamburger" for which $35 is more than enough.
Now for the "BMW" scene. Note that the props and materials are identical, and on those terms you'd say the "Hamburger" is pretty much the same as the "BMW". But the light situation is completely different, and 35 samples does not work here AT ALL, same as $35 is not enough for a BMW.
Why is the second image so grainy? The light has to be found by random rays and most of those rays do not go through the windows. The only way to find the strong light is through the windows, and since most of the rays do not go there, most of the lighting is very hard to find. Therefore it takes a very large number of samples to fill in all the missing dots of light.
In the first image, the lights are inside the room and they are Poser "Lights" not geometry. Why do we even have lights? Because they're special. Lights, unlike glowing geometry, are KNOWN sources of lighting info and the render engine does not randomly shoot rays HOPING it hits a light. It specifically sends a ray DIRECTLY at the light. This is the reason we even HAVE lighting objects in renderers. If we didn't need them, we would all be using glowing polygons.
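To put rough numbers on that, here is a little Monte Carlo toy in plain Python/NumPy (it is not Superfly code, just an illustration of the sampling argument). It models a pixel whose light either fills half the hemisphere above it ("lights in the room") or only a small window-sized fraction of it, and shows how noisy a fixed 35-sample budget is in each case when the renderer has to stumble onto the light by chance.

```python
# Toy model of the grain difference: each random ray "finds" the light
# with probability f, and the pixel estimate is (hits / samples) * intensity.
# Direct light sampling would return f * intensity exactly, with no noise.
import numpy as np

rng = np.random.default_rng(0)
intensity = 1.0
samples = 35            # same sample budget for both scenes
pixels = 100_000        # pretend image

for label, f in [("lights in the room", 0.50),
                 ("light only through the window", 0.02)]:
    hits = rng.binomial(samples, f, size=pixels)   # rays that hit the light
    estimate = hits / samples * intensity          # naive path-traced value
    exact = f * intensity                          # what direct sampling gives
    rel_noise = estimate.std() / exact
    # samples needed to get down to ~17% relative noise (the easy case):
    needed = (1.0 - f) / (f * 0.17 ** 2)
    print("%-30s noise at %d samples: %4.0f%%, samples for ~17%%: %5.0f"
          % (label, samples, 100 * rel_noise, needed))
```

With those numbers, the "lights in the room" pixel is already acceptably clean at about 35 samples, while the "window only" pixel needs on the order of 1,700 samples to reach the same noise level, which is the hamburger/BMW gap in concrete form. Real Superfly traces rays in 3D with BSDFs and bounces, so the exact figures are only illustrative, but the scaling behaves the same way.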
So - having established that lights "attract" the attention of the rendering engine and help it find useful solutions to the rendering equation in fewer steps, it necessarily follows that if you have judiciously used "lights" to good effect so the rendering engine locates them with ease, then it needs fewer samples. A lot fewer.
There are different kinds of lights. It would seem, then, that there are pros and cons to these, otherwise we'd only have ONE kind of light. This brings us to another level of distinction. Point lights, spot lights, area lights, infinite lights - these all use different math with different consequences on the rendering equation. Of course since the equation is different, the samples needed becomes different.
Even if you and I use the SAME kind of light but we use different numbers of them, OR put them in different places, the resulting rendering equation is now DIFFERENT. The minimum workable samples will ALSO be different.
Trying to use the same samples (just that one number) is really setting you up for failure.
In fact, trying to use all the other numbers the same (without consideration of what they do to the rendering equation and in the context of YOUR scene) is setting you up for failure.
Generally for "good" renders I set up a ridiculous number like Bantha, 600 or 999 samples, and use progressive mode. I look in an hour and if it's clean I stop. I look in 4 hours and if it's clean I stop. I look a day later and if it's clean I stop. Otherwise I just let it run and don't worry about guessing the right number of samples. Once you've committed to 24 hours, another 12 is nothing. But if you auto-stop at 24 hours, another 12 actually requires another 36. That sucks big time.
If SM would let us "continue" a render then I'd not do it that way.
I always found other ray tracers hated rendering point lights with scattering or high-bump specular mats, so I never tried them in Superfly. HDRI maps have the benefit that the rays always terminate on a light value instead of just spinning off into space. The other problem I find with using light objects is that it's necessary to add geometry and realistic mats for the lighting to work. Consider sitting in a room at night lit by a single light bulb: the soft shadows you see come from the light reflecting off the walls, ceiling, floor and everything else in the room, and to render that convincingly a room has to be built in the scene with mats set up to reflect the light correctly. Although fake, HDRI lighting will introduce a level of ambient lighting without all the extra work. This is why I think it is better to start off with this type of lighting before moving on to more complex lighting setups.
I don't think that ray tracers hate point lights. It's just that in reality we usually don't have point lights; all our light sources have dimensions, size. Superfly uses spheres for point lights, so that should not be a problem. Real point lights look unrealistic because they are unrealistic. It's like lighting everything with a very tiny, ultra-bright LED, and that does not look good in reality either.
If you want to create a realistic environment, you need the reflected light from the walls, that's correct. The ceiling, the floor. Superfly will do that for you, without an HDRI map. In the scene above, I assume the second render would be less grainy if BB did not use an environment sphere, but just the couple of polygons you can see through the window. In addition, the scene would have been even more grainy if there were a light emitter (not necessarily a light; some prop with Ambient in the shader is enough) outside the wall.
In Blender you have a way to "bake" the lighting. For certain parts which don't change in the render, like the ceiling and probably the walls, Blender will calculate a new texture which is then used as a light source: the material of the wall or ceiling is changed to a light, shining with that calculated texture. This won't work for the floor, since you won't get shadows on the floor then, and it won't work if someone is close enough to the wall to cast a shadow on it. But if you stay within those limits, baking will give you realistic renders in very little time. The game industry uses it all the time, since there is no way to calculate ambient light in real time yet, as far as I know.
And thanks, BB, for explaining everything much more clearly than I did. Great work as always.
I tried placing two area lights as strips outside the front and back windows of this room. It did make slightly less grain but it also totally ruined the realism. The effect of a sky is unmistakable and very difficult to shortcut through other means.
I think I need to raise the light strips so the light is coming from a higher source elevation.
I tried other positions for the strip lights and all created artifacts of one sort or another that do not match the lighting of a sky dome.
So - back to the sky dome. I turned on branched path tracing with 3 diffuse samples, which gives more emphasis to solving the diffuse lighting. One thing to note is that when you enable BPT, glossy reflections are much brighter. I don't know which is correct, reflections with BPT or without, but my intuition tells me the BPT version is realistic and there's some sort of bug when not using BPT.
Note I can only do CPU renders. GPU may do something completely different.
Anyway, I ran 500 pixel samples, and went off to do other stuff. Upon my return, the render status showed 2216 samples (which corresponds to about 46 in render settings pixel samples).
This looks good to me.
And after a couple minutes with a postwork noise reduction, I think I got a pretty good result without doing another couple hours rendering.
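(The post doesn't say which noise-reduction tool was used. As one possible postwork route, a couple of lines of Pillow will knock back the worst of the remaining grain; a median filter is crude next to a dedicated denoiser, but it's quick and it kills isolated bright "fireflies". The file names are placeholders.)

```python
# Quick-and-dirty postwork denoise with Pillow: a small median filter
# removes isolated bright speckles while mostly preserving edges.
# One option among many; dedicated denoisers do a better job.
from PIL import Image, ImageFilter

img = Image.open("superfly_render.png").convert("RGB")   # placeholder file name
denoised = img.filter(ImageFilter.MedianFilter(size=3))  # 3x3 median
denoised.save("superfly_render_denoised.png")
```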
Just to add to the things that can speed up rendering: remove or hide anything in the scene you don't need. In particular, the standard environment (the Background prop, if I remember correctly) increases render times when it's on, since it's basically a prop around your scene that the rays will interact with and bounce off, so hiding it when you don't need it will improve render time as well.
Another way you can test your scene is to render it in progressive mode; after a while you will be able to see areas that are grainier than others, so you can use those as test areas with the Area Render functionality and just adjust the settings until those areas look the way you want. That way you will get a good idea of what settings you need.
I did a test in another thread with the min and max diffuse bounces, and what we figured out there was that by lowering max diffuse bounces to 1 you lose some quality in the bounce lighting, but it can improve render speed quite a lot, so you can experiment with that as well and see if it works for you.
Yet another thing you can do is render the image slightly larger than you need. Say you're aiming for 800x600: render it at 1000x750 instead and then scale it down in Photoshop. That can reduce noise as well, since pixels start to "blur" together. Obviously you shouldn't go nuts here and render 6000x4500 if you only need 800x600. But even though you render a larger image, you might be able to use fewer samples and therefore increase render speed without losing too much quality.
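If you'd rather not round-trip through Photoshop for the scale-down step, the same supersample-and-shrink trick is a few lines of Pillow. With the 1000x750-to-800x600 example above, each final pixel averages roughly 1.6 rendered pixels, which is what eats the noise. File names are placeholders.

```python
# Supersample-and-downscale with Pillow: render at 1000x750, deliver 800x600.
# Lanczos resampling averages neighbouring pixels, which smooths the grain.
from PIL import Image

big = Image.open("render_1000x750.png")        # placeholder file name
final = big.resize((800, 600), Image.LANCZOS)  # target size from the example
final.save("render_800x600.png")
```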
Lastly, if you only need one image for the cover, it might be worth, as others have mentioned, simply spending the time to let it render at good quality. :)
In general, give samples/bounces for effects you're using, and don't waste samples/bounces for effects you're not using.
If you have glass with thickness in the scene, a ray must be able to hit both sides of the glass, bounce off an object behind, then hit both sides of the glass on the return trip. So, I'd set max bounces and max transparent bounces to 16. That doesn't force sixteen bounces; if the light ray path is completed in five bounces, then that is all that Superfly uses for that ray. If you don't have any volumetric materials in the shot, then no need to waste render time with volume samples or volume bounces.
I generally use progressive refinement, and set the overall samples quite high, then stop the render once it's clean enough.
Poser 12, in feet.
OSes: Win7Prox64, Win7Ultx64
Silo Pro 2.5.6 64bit, Vue Infinite 2014.7, Genetica 4.0 Studio, UV Mapper Pro, UV Layout Pro, PhotoImpact X3, GIF Animator 5
Thank you. I've learned a lot from this thread, and it certainly helped me render a wide night-scene lit only by 'super-ambient' street lights.
https://www.deviantart.com/nik-2213/art/Park-path-by-night-99pixel-Superfly-render-843031050
One gotcha: after much Googlin', I learned I did not need a RegEdit to turn on CUDA for my twin GTX 750 Ti cards; their NVIDIA control panel let me toggle the option. As the 'program' list only found P7, I used the 'global' route. And, as each card had 640 cores, I was able to set a 1200 bucket, leaving a few for the Windows UI...
I've always used Firefly, until lately, when I wanted to make a more lifelike render for a book cover. I've been experimenting with Superfly, but omg it takes forever, and it doesn't have to be a complex scene or even a whole scene.
I did find EZSkin by Snarly, and it worked wonders on the eyes.
What settings are 'acceptable' or decent using the presets in Superfly? I have a very fast machine and a decent video card, a 1080 Ti. The CPU renders are faster than the video card, but I'm lost as to how much better the BEST preset is vs. the medium one. And I can't wait until I'm dead for the results. Any suggestions for Superfly settings where I'll see results this century? Thanks!