~Fortune Teller~ by Ken1171_Designs
Description
~Fortune Teller~
This 3D scene was set up in Poser using a wide-angle 13mm camera lens for a dramatic perspective effect, with an unclothed woman placed in the foreground for composition. She doesn't need textures, clothing, or hair because those are added in Stable Diffusion. All I had to do was render a depth map of the whole scene and use it in ControlNet to recreate the entire scene in a different style, fleshing out the details with a prompt in Stable Diffusion.
Some things changed, but the general composition carried over, including the woman, her placement, and her pose. This way I can build the scene in 3D without worrying about materials, clothing, or lighting; all of that can be added in SD with a text prompt, where the chosen AI model checkpoint defines the general style. It's like using SD as a rendering engine for 3D scenes, and a powerful way to create them with a lot of control.
To make this work, the trick is to keep the depth map active only during the first 10-15% of the image generation steps, which gives the AI plenty of room to flesh out the details from my prompt. 3D scene and camera set up in Poser, rendered in Stable Diffusion, postwork in PaintShop.
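For anyone who wants to try something similar outside the Automatic1111 web UI I use, here is a minimal sketch of the same early-cutoff idea with the Hugging Face diffusers library. The model IDs, the depth map file name, and the exact 0.15 cutoff are illustrative assumptions, not my actual settings.

```python
# A minimal sketch: Poser depth map fed to a depth ControlNet, with the
# depth constraint active only for the first ~15% of the denoising steps.
import torch
from diffusers import ControlNetModel, StableDiffusionControlNetPipeline
from diffusers.utils import load_image

# Depth ControlNet for Stable Diffusion 1.5 (checkpoint names are examples).
controlnet = ControlNetModel.from_pretrained(
    "lllyasviel/sd-controlnet-depth", torch_dtype=torch.float16
)
pipe = StableDiffusionControlNetPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    controlnet=controlnet,
    torch_dtype=torch.float16,
).to("cuda")

# Depth map rendered from the Poser scene (hypothetical file name).
depth_map = load_image("poser_scene_depth.png")

image = pipe(
    prompt=(
        "fortune teller in a garden at night, full moon, gas lamps, "
        "blonde woman in medieval clothes, grass, bushes and flowers"
    ),
    image=depth_map,
    num_inference_steps=30,
    # Depth guidance only during the first ~15% of the steps; after that
    # the prompt alone fleshes out materials, clothing, and lighting.
    control_guidance_start=0.0,
    control_guidance_end=0.15,
).images[0]
image.save("fortune_teller_sd.png")
```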
Comments (4)
calico1
Thanks for explaining. Using Stable Diffusion this way seems like it could be quite useful. Is there a good place to learn SD well enough to get this sort of result?
Ken1171_Designs
This particular method uses a depth map, so it only carries the shapes, volumes, and placements. All materials and other details are set with the text prompt, which gives me full freedom to "decorate" the scene any way I want. For example, I said the scene happens at night with a full moon, lit by gas lamps, the garden has grass, bushes, and flowers, and the woman is blonde, wearing medieval clothes. But what if I said it's a sci-fi or cyberpunk scene? Anything is possible. It takes rendering a 3D scene from Poser to a whole new level of flexibility.
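To put that in code terms, here is a toy continuation of the sketch in the description above (it reuses the pipe, depth_map, and imports from there; the prompts and seed are only examples): swapping nothing but the prompt re-decorates the identical composition.

```python
# Same depth map and early cutoff, different decoration: only the prompt
# changes, so shapes, volumes, and placement stay put while the style flips.
styles = {
    "medieval": "blonde woman in medieval clothes, garden at night, "
                "full moon, gas lamps, grass, bushes and flowers",
    "cyberpunk": "woman in a neon-lit cyberpunk alley at night, rain, "
                 "holographic signs and flying cars",
}

for name, prompt in styles.items():
    image = pipe(
        prompt=prompt,
        image=depth_map,
        num_inference_steps=30,
        control_guidance_end=0.15,
        # Fixed seed so the two renders differ only in their "decoration".
        generator=torch.Generator("cuda").manual_seed(42),
    ).images[0]
    image.save(f"scene_{name}.png")
```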
Learning Stable Diffusion with the Automatic1111 web interface is much easier nowadays, with tons of video tutorials on YouTube. In comparison, when I got started, SD didn't even have an interface - it was just Python code running in a command line shell! LOL
However, the key element for this level of control is mastering the ControlNet extension. Without it, we simply have no control over anything. For instance, my OpenPose plugin for Poser cannot work without it, and none of these online AI image generators have anything like it. That's why I stick with SD instead of Midjourney or DALL-E. Here again, there are plenty of ControlNet tutorials on YouTube, so once you are comfortable with SD, ControlNet should be next in line to learn. 🙂
shadelix
Wow! Now you got me interested. LOL Now I understand your explanations a lot better.
Ken1171_Designs
Thank you! I think this image demonstrates the concept better. It's like composing the scene in Poser and then fleshing it out with a text prompt in Stable Diffusion, where I have full freedom to "decorate" the scene: making it a full-moon night, adding the flowers and gas lamps, and giving it a medieval flavor, including what the woman is wearing. ^___^
RodS
This is a great look at how the process works! And what an amazing result on the right image!
Ken1171_Designs
Thank you! This 3D-to-AI workflow is powerful and flexible. It can give new life to even ancient figures like Posette, making her look brand new with AI augmentation. ^___^
calico1
Thanks Ken!