Forum Moderators: Wolfenshire, Deenamic Forum Coordinators: Anim8dtoon
Photoshop F.A.Q (Last Updated: 2024 Nov 04 10:41 pm)
Hello White Raven! I have come across some 2D software called FantaMorph before; I think it may help you do what you want, though I haven't tried it myself. Before you use FantaMorph, you may want to use something like FaceAge or another program that will make the male and female faces look younger before you morph them. I don't know if that was the answer you were looking for, but there it is; I hope I've helped a little bit. Good luck with your character!
Denesia (Dee)
Moderator
Forums: Photoshop ☮ Photography ☮ Mixed Medium ☮ Freestuff ☮ Freestuff Testing ☮ Changes wanted in the Community ☮ Marvelous Designer ☮ Challenge Arena ☮ Animation ☮ Fractals ☮ Beta Testing ☮ The Get-Away Spot ☮ Newcomer Corner ☮ Suggestion Box ☮ Contest Announcements
You might consider two or three techniques (no matter which tool you use).
Blending: Put one face on one layer and the other on a second layer, then combine them with a variety of layer blending modes: screen, multiply, additive, difference, and so on, each at an adjustable opacity. I'm not sure how many layer blending modes PS gives you, but there are usually a lot, and you can repeat and combine them for more effects: stack several subtract, divide, and multiply passes, for example. Try the 30+ different blending modes seen in many programs (including Dogwaffle, although the key 'replace' mode is not present there; Dogwaffle is a bit limited when it comes to alpha-based per-layer blending, but math operations are plentiful, there and in most other apps).
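To make the blending idea concrete, here is a minimal NumPy sketch (the 64x64 random "faces", the `blend` helper, and its mode names are just placeholders for illustration). It shows a few common mode formulas with an opacity mix back into the base layer; real layer engines offer many more modes:

```python
import numpy as np

# Two grayscale "face" images as float arrays in [0, 1] (random stand-ins here).
a = np.random.default_rng(0).random((64, 64))
b = np.random.default_rng(1).random((64, 64))

def blend(a, b, mode, opacity=0.5):
    """A few common layer blend modes; `opacity` mixes the result back with `a`."""
    if mode == "multiply":
        out = a * b                            # darkens
    elif mode == "screen":
        out = 1.0 - (1.0 - a) * (1.0 - b)      # lightens
    elif mode == "additive":
        out = np.clip(a + b, 0.0, 1.0)         # sum, clipped to valid range
    elif mode == "difference":
        out = np.abs(a - b)                    # highlights where the faces differ
    else:
        raise ValueError(mode)
    return (1.0 - opacity) * a + opacity * out

# A 50/50 screen blend of the two layers.
half = blend(a, b, "screen", opacity=0.5)
```

Stacking several such calls (the output of one blend as the input of the next) reproduces the "repeat and combine" idea from above.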
Morphing: Key features are identified on the first image (tip of the nose, center of each eye, left and right corner of each eye, tips of the cheekbones, etc.): identify 10-20 key elements that you can see clearly in either image. Then define how each of those key points moves from one face to the other. Linear interpolation drags each point across, and with it the neighborhood of pixels, in a way that transforms one image into the other. Adjust the transition level and you'll have a morph that is a bit of face 1 and a bit of face 2. This is what many morphing tools let you do. The key points can get sophisticated in some cases, such as key shapes (lines, ovals, ...) or controls for how strong the attraction of a key element is (weak to strong, linear falloff, parabolic, logarithmic, sudden, Gaussian, etc.).
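The heart of that morphing step, linearly sliding each key point from one face toward the other, can be sketched in plain Python. The landmark coordinates below are made up for illustration; a real morphing tool would also warp the surrounding pixels along with each point:

```python
# Corresponding landmarks on face 1 and face 2 as (x, y) pairs.
# These values are hypothetical: e.g. left eye, right eye, nose tip.
face1_pts = [(30, 40), (50, 40), (40, 60)]
face2_pts = [(28, 42), (52, 41), (41, 63)]

def interpolate_points(p1, p2, t):
    """Slide each landmark from face 1 (t=0) toward face 2 (t=1) linearly."""
    return [((1 - t) * x1 + t * x2, (1 - t) * y1 + t * y2)
            for (x1, y1), (x2, y2) in zip(p1, p2)]

# A 50/50 layout: roughly where the "child" face's features would sit.
midway = interpolate_points(face1_pts, face2_pts, 0.5)
```

The "transition level" mentioned above is exactly the `t` parameter: nudge it toward 0 or 1 to bias the result toward one parent.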
Do both, with motion-estimated interpolation. Instead of identifying a limited number of key markers, make the entire image carry key markers all over. Imagine a grid across the whole image, perhaps of square regions just 12x12 pixels in size (adjustable, but each square the same size). At the center of each square we place a motion tracker, and the same for the other image. Actually, the software does this for you. Then ask it to calculate the path for each little square, each 'tile': for any tile in the first image, where did that tile move to in the second image? In some cases there may be an exact match, and it may find that a tile didn't move at all and is at exactly the same location; more likely it will find it nearby, at a slightly or very different position. It may not find the exact same tile anywhere, so it calculates correlation patterns to see which candidate is the best match. A corner of an eye is still a similar corner of an eye, even if slightly different.

The motion estimation approach will find where things went from image 1 to image 2, and will let you set thresholds, so that it can fall back to simple blending of the tiles if nothing close enough and similar enough is found for a given tile. You also get to tell it how many frames of interpolation you want: try 5, 10, 30, 100, whatever. Once it has an idea of where each tile moves from frame 1 to frame 2, it allocates memory for the in-between frames (5 frames, 10, 30... whatever you asked for) and starts interpolating, moving the tiles from point A to point B and also interpolating their pixel values (in RGB space, three channels in parallel, or in HSV space, or other). Remember, most tiles won't have found their identical counterpart, so they'll blend toward point B as they also move to it. There's usually a setting, sometimes called the wiggle factor or something else indicating it is related to noise, that controls how much noise the algorithm will tolerate in a tile: too much noise and it will give up and resort to blending
(for example, when looking at fast-moving foam on top of fast, erratic ocean waves; that's next to impossible to motion-track).
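The tile-matching step described above can be sketched with NumPy, using a simple sum-of-squared-differences search within a small radius (real implementations use more robust correlation measures and sub-pixel refinement; all names here are illustrative). The second frame is fabricated by shifting the first, so the tracker should recover exactly that shift:

```python
import numpy as np

rng = np.random.default_rng(42)
frame1 = rng.random((48, 48))
# Fabricate frame 2 by shifting frame 1 down by 2 and right by 3 pixels.
frame2 = np.roll(np.roll(frame1, 2, axis=0), 3, axis=1)

def match_tile(frame1, frame2, y, x, tile=12, radius=4):
    """Find where the tile at (y, x) in frame1 moved to in frame2 by
    searching a (2*radius+1)^2 neighborhood for the lowest SSD score."""
    ref = frame1[y:y + tile, x:x + tile]
    best, best_dy, best_dx = np.inf, 0, 0
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            yy, xx = y + dy, x + dx
            # Skip candidate positions that fall outside the image.
            if yy < 0 or xx < 0 or yy + tile > frame2.shape[0] or xx + tile > frame2.shape[1]:
                continue
            ssd = np.sum((frame2[yy:yy + tile, xx:xx + tile] - ref) ** 2)
            if ssd < best:
                best, best_dy, best_dx = ssd, dy, dx
    return best_dy, best_dx, best

dy, dx, err = match_tile(frame1, frame2, 12, 12)   # should recover (2, 3)
```

The "threshold" idea from above maps onto `err`: if the best SSD is still too large (a noisy tile with no good counterpart), a real system would ignore the motion vector and just crossfade that tile instead.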
In the end, you'll have motion-estimated interpolation (aka motion prediction). You'll see a whole bunch of frames that start from the known image 1 and finish at the known image 2. The rest are interpolated images that show movement happening on all or most tiles of the images.
Then you pick the frame that fits your needs, if you can find one. This technique is tricky if there is too much movement between the start and end images. It works great for things that change a little all over, such as video noise or other small changes. In many implementations you can adjust the size of the tiles and how far to search for possible candidates.
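To illustrate the in-between frame generation, here is a heavily simplified NumPy sketch that treats the whole image as a single tile with one already-known displacement (a real system does this per tile, with sub-pixel motion; the function name and setup are mine, not from any particular tool). Each in-between frame slides the content part of the way along the path while crossfading the pixel values:

```python
import numpy as np

def inbetween_frames(frame1, frame2, displacement, n):
    """Generate n interpolated frames between frame1 and frame2, given one
    global (dy, dx) displacement. Content slides a fraction of the way along
    the path while pixel values crossfade from frame1 to frame2."""
    dy, dx = displacement
    frames = []
    for k in range(1, n + 1):
        t = k / (n + 1)                      # interpolation weight in (0, 1)
        # Move frame1's content part of the way forward along the path...
        moved = np.roll(np.roll(frame1, round(t * dy), axis=0),
                        round(t * dx), axis=1)
        # ...and pull frame2's content part of the way back toward the start.
        back = np.roll(np.roll(frame2, round(-(1 - t) * dy), axis=0),
                       round(-(1 - t) * dx), axis=1)
        frames.append((1 - t) * moved + t * back)  # crossfade the two
    return frames

rng = np.random.default_rng(0)
f1 = rng.random((32, 32))
f2 = np.roll(f1, 4, axis=1)                  # f2 is f1 shifted right by 4
mids = inbetween_frames(f1, f2, (0, 4), 3)   # 3 in-between frames
```

Note that `np.roll` wraps around at the borders, which is exactly the unusable-border problem discussed next; real tools crop or pad instead of wrapping.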
There will be thin border regions that can't be used (because the algorithm has no idea what lies outside of the image; it can only guess). Just crop them off. Most automated motion prediction systems crop automatically and resample/resize the remaining image to the original size, so you'll notice a little bit of zooming in. You can also add an extra region around the original images before letting it do the motion estimation, so that it can feed off of those outer pixels, or whatever you add to the outer 'frame'. It may help to have the same frame around both images.
I don't know if Photoshop has such a feature; it's something you see more with animation and video. PS does have quite an arsenal of tools for handling animation nowadays, but motion estimation is often still the province of a separate plugin at $99. Look around to see if you can find one for PS. I assume you're looking to do this (or at least significant parts of it) in PS, since you posted here.
If you'd like some examples of technique #3, look at the Motion Prediction module of project Dogwaffle (sorry, Windows only): http://www.thebest3d.com/howler/7/motion-prediction-module.html
This version doesn't run on the GPU yet, but it is multi-threading enabled, so the more CPU cores you have, the faster it goes. It is very compute-intensive. Here are some numbers to give you an idea:
http://www.thebest3d.com/howler/7/motion-prediction-mania.html
I hope this helps. If you find a motion estimation technique suitable to your tools, I'd love to learn about it.
I was wondering if there is a way to merge a male and female face to create a "child"? I'm working on an idea that will show a character of a new protagonist that is the young child of the previous main characters.
Thanks in advance!