~Foxy Toon~ by Ken1171_Designs
Description
~Foxy Toon~
Having fun with toon style. Pose and camera set in Poser, rendered in Stable Diffusion, postwork in PaintShop.
Comments (2)
poser4me
Most excellent! The girl next door.
Ken1171_Designs
Thank you! I love this style. 😁
JohnnyM
I like how your girl looks; you are doing a great job showing us how seamlessly your script exports the pose from Poser to Stable Diffusion. I have a question for you... are you able to duplicate this girl's look using a different pose while she wears the same clothes, fashion accessories, and hair?
If Stable Diffusion can reproduce this exact look across different poses, why would it be difficult to create an animation by stitching a set of frames together in a movie editor to make a moving scene? If it's not possible now, what's your guess as to how far away we are from the day when we can all make animations the way you make these wonderful still renders?
Keep up the great work and have a great Halloween! :-)
Ken1171_Designs
Hi JohnnyM, thank you for the thoughtful feedback! ☺
Temporal consistency was initially impossible, but there has now been considerable effort in that direction. You might have seen some AI animations on YouTube, where consistency quality varies a lot. I would say we are not quite there yet, but AI evolves really fast these days. Things that used to take months or years now happen in a matter of weeks.
Besides temporal consistency, we also have issues with camera occlusion, much as camera-based mocap (like the Kinect) loses track of joints that are obstructed by something else. Whenever I tried a frame-by-frame animation from 3D to AI, this was more troublesome than temporal consistency. For example, in a simple walk cycle, whenever an arm (or leg) goes behind the body, the AI "gets creative" with where it should be. It can end up in ANY pose. The same used to happen with Kinect mocap. The basic idea is that the mocap captures data at a fixed rate, and when a joint disappears (hidden behind something), it basically records random data. It's similar with AI: when it can't see a joint, it goes random with its placement.
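To make the occlusion point concrete, here's a minimal sketch of the idea in Python (not my actual script, and the threshold and keypoint layout are just illustrative assumptions). It uses the OpenPose convention where each keypoint is an (x, y, confidence) triple: an occluded joint comes back with low confidence or junk coordinates, and one common mitigation is to zero it out so the pose conditioning simply omits that joint instead of feeding the model random data to improvise from.

```python
from typing import List, Tuple

Keypoint = Tuple[float, float, float]  # (x, y, confidence), OpenPose convention

OCCLUSION_THRESHOLD = 0.3  # below this, treat the joint as unseen (assumed value)

def mask_occluded_joints(pose: List[Keypoint]) -> List[Keypoint]:
    """Zero out joints the detector could not see, rather than passing noise."""
    cleaned = []
    for x, y, conf in pose:
        if conf < OCCLUSION_THRESHOLD:
            # Joint was occluded (e.g., an arm behind the body in a walk
            # cycle): drop it instead of letting downstream stages guess.
            cleaned.append((0.0, 0.0, 0.0))
        else:
            cleaned.append((x, y, conf))
    return cleaned

# Example frame: the last joint (say, a wrist) is hidden behind the torso.
frame = [(0.52, 0.18, 0.97), (0.48, 0.35, 0.91), (0.50, 0.40, 0.08)]
print(mask_occluded_joints(frame))
# -> [(0.52, 0.18, 0.97), (0.48, 0.35, 0.91), (0.0, 0.0, 0.0)]
```

Omitting the joint doesn't solve the problem (the AI still has to guess), but it at least keeps garbage coordinates out of the conditioning.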
I have been loosely following AI progress, and both issues are still present in the current tools. Based on results I have seen on YouTube, temporal consistency is LESS of an issue than camera occlusion. The way to fix occlusion with camera-based mocap is simply to add more cameras, but that cannot be done when the input is a video or image sequence with only a single perspective. Hard to tell how AI could fix this. :)
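Here's a tiny sketch of why extra cameras fix it, using the same assumed (x, y, confidence) keypoints as above: a joint hidden from one camera is usually visible to another, so per joint you can keep the most confident observation. Real multi-camera mocap triangulates in 3D rather than cherry-picking 2D points, but the redundancy idea is the same, and with a single video there is only one view to draw from, which is exactly the limitation.

```python
def merge_camera_views(views):
    """Per joint, keep the most confident observation across all cameras."""
    return [max(observations, key=lambda kp: kp[2])
            for observations in zip(*views)]

cam_a = [(0.52, 0.18, 0.97), (0.50, 0.40, 0.08)]  # second joint occluded here...
cam_b = [(0.53, 0.19, 0.88), (0.49, 0.41, 0.85)]  # ...but visible from camera B
print(merge_camera_views([cam_a, cam_b]))
# -> [(0.52, 0.18, 0.97), (0.49, 0.41, 0.85)]
```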