60 threads found!
| Thread | Author | Replies | Views | Last Reply |
|---|---|---|---|---|
| | staigermanus | 0 | 148 | (none) |
| | staigermanus | 6 | 565 | |
| | staigermanus | 0 | 145 | (none) |
| | staigermanus | 0 | 34 | (none) |
| | staigermanus | 0 | 116 | (none) |
| | staigermanus | 0 | 368 | (none) |
| | staigermanus | 5 | 893 | |
| | staigermanus | 2 | 68 | |
| | staigermanus | 2 | 290 | |
| | staigermanus | 1 | 229 | |
| | staigermanus | 0 | 47 | (none) |
| | staigermanus | 7 | 277 | |
| | staigermanus | 0 | 39 | (none) |
| | staigermanus | 1 | 50 | |
| | staigermanus | 0 | 57 | (none) |
599 comments found!
Thread: Is there a thread on just materials, and editing/changing materials, not th | Forum: 3D Modeling
Oh, I know that if I make my own models, such as in Blender, Carrara or other tools, there will be a material editing section, a shader room, or whatever it's called case by case. It's of course possible to make changes there, in the native app the model came from. Or, for example, if a model was saved as Wavefront .obj with .mtl materials, it can be re-imported into other apps and the materials re-edited.
That is kind of what I have in mind, since in PD Howler we are starting to have PBR material capabilities. So whether I make my own model or import one that was acquired, I imagine there are tutorials and tips/tricks focused on just the material props and how to edit them for various effects, such as turning a clay surface into marble with some reflection, or silver, golden, rusty... and many other looks.
Maybe there hasn't been a focus on material props regardless of the apps. I was just thinking, since many artists use more than one tool, it might help to compare notes and techniques across applications, and even share resources in some cases.
Thread: It's time to get spooky, our Halloween contest is here! | Forum: 3D Modeling
Well, not sure if it's spooky enough, but this started as a model in Blender (NASA's free ISS (Interior) download): I loaded it into Blender, exported to Wavefront OBJ, increased Pm from 0.0000 to 0.2 (metallic) for slight environment reflection, then loaded it into Howler 2024, used the image as a forcefield for a few foliage particle brushes, and finally added some lens flares. Scary huh? ;-)
You don't need a scary clown to imagine seeing one hiding there ;-)
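For context, the Pm value mentioned above lives in the exported Wavefront .mtl file; a minimal excerpt of such an edit might look like this (the material name and Kd values are placeholders; Pm and Pr are the commonly used PBR-extension keywords for metallic and roughness):

```
newmtl iss_panel
# Kd: standard diffuse color
Kd 0.800 0.800 0.800
# PBR extension: metallic, raised from the exported 0.0000
# to pick up slight environment reflections
Pm 0.2
```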
Thread: Carrara is Primary in my workflow | Forum: Carrara
Hm, no Howler? What do you use to throw keyframed filters across an animation for intelligent post work?
After Effects?
Thread: Do you remember these? | Forum: Carrara
Thread: Carrara 8.5 Pro on sale right now 2018.05.20 | Forum: Carrara
I see it at 88% off (#36). Maybe you have to be logged in with your DAZ account to see the discount? I already have it, so it's not a tease for me, but otherwise it's definitely a good value.
Thread: Effects Tab; Glare Override? | Forum: Carrara
I assume you didn't find anything on the topic in the documentation? (I have memories of a printed user guide dating back to Carrara Studio 1, lol.) I can't say I've even opened an online reference guide in years.
Thread: Effects Tab; Glare Override? | Forum: Carrara
I haven't looked at or used this lately, but logic would have me think that essentially there is a scene-level setting for Glare, and each object can have its own settings that override it: the object can override the scene settings (box 1), and if so, it can accept glare from the scene (box 2) and it can contribute glare to the rest of the scene (box 3). Something like that?
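Read as program logic, that guess amounts to something like this toy sketch (all names invented for illustration; not Carrara's actual internals):

```python
def effective_glare(scene, obj):
    """Toy model of the guessed checkbox semantics (hypothetical names):
    if the object's override box (box 1) is checked, its own accept (box 2)
    and contribute (box 3) flags replace the scene-level glare settings."""
    if not obj["override_scene"]:
        return scene["accept_glare"], scene["contribute_glare"]
    return obj["accept_glare"], obj["contribute_glare"]

# Example: an object that overrides the scene, refusing incoming glare
# but still contributing its own glare to the rest of the scene.
scene = {"accept_glare": True, "contribute_glare": True}
obj = {"override_scene": True, "accept_glare": False, "contribute_glare": True}
print(effective_glare(scene, obj))  # -> (False, True)
```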
Thread: Is this possible? | Forum: Photoshop
You might consider 2 or 3 techniques (no matter which tool).
Blending: you have one face in one layer and the other face in another layer. Combine them with a variety of layer blending modes: screen, percentage mix, texturize, additive, etc. Not sure how many layer blending modes PS will give you, but there are usually a lot, and you can repeat and combine them for more effects: several subtract, divide and multiply passes, and so on. Try the 30+ different layer blending modes seen in many programs (including Dogwaffle, though the key 'replace' mode is not present there; Dogwaffle is a bit limited when it comes to alpha-based per-layer blending, but math operations are plentiful there and in most other apps).
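As a rough illustration of what those math-based modes do under the hood, here is a minimal NumPy sketch (not any one app's exact formulas; real layer engines also factor in opacity and alpha):

```python
import numpy as np

# Two same-size RGB images as float arrays in [0, 1]. Random stand-ins here;
# in practice you'd load the two face photos instead.
rng = np.random.default_rng(1)
a = rng.random((256, 256, 3))  # face 1 layer
b = rng.random((256, 256, 3))  # face 2 layer

multiply   = a * b                        # darkens: white is the neutral color
screen     = 1.0 - (1.0 - a) * (1.0 - b)  # lightens: black is neutral
additive   = np.clip(a + b, 0.0, 1.0)     # simple sum, clipped to valid range
difference = np.abs(a - b)                # highlights where the faces differ
mix        = 0.5 * a + 0.5 * b            # 50/50 cross-blend ("percentage")
```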
Morphing: key features are identified on the first face image (tip of nose, center of each eye, left and right corners of each eye, tips of the cheek bones, etc.); identify 10-20 key elements that you can see clearly in either image. Then define how each of those key points moves from one face to the other. Linear interpolation drags each point across, and with it its neighborhood of pixels, in a way that transforms one image into the other. Adjust the transition level and you'll have a morph that is a bit of face 1 and a bit of face 2. This is what many morphing tools let you do. The key points may get sophisticated in some cases, such as key shapes (lines, ovals, ...) or controls for how strong a key element's attraction is (weak to strong, linear falloff, paraboloid, logarithmic, sudden, gaussian, fade, etc.).
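A minimal sketch of the key point side of this, assuming matching landmarks have already been picked by hand (the coordinates here are made up):

```python
import numpy as np

# Matching landmarks (x, y) picked on each face, in the same order:
# tip of nose, eye centers, eye corners, cheekbones, ... Placeholder values.
pts1 = np.array([[120, 140], [90, 100], [150, 100], [118, 180]], dtype=float)
pts2 = np.array([[125, 150], [88, 105], [155, 102], [122, 188]], dtype=float)

def morph_points(t):
    """Linearly interpolate every key point from face 1 (t=0) to face 2 (t=1)."""
    return (1.0 - t) * pts1 + t * pts2

# At each transition level t, a morphing tool warps image 1 toward
# morph_points(t), warps image 2 back toward the same points, and then
# cross-dissolves the two warped images with weight t. The warp itself,
# i.e. how strongly each key point drags its pixel neighborhood along,
# is where the falloff choices above (linear, gaussian, ...) come in.
for t in (0.0, 0.25, 0.5, 0.75, 1.0):
    print(t, morph_points(t).tolist())
```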
Do both, with motion-estimated interpolation (a code sketch of the core matching step follows after this explanation). Instead of identifying a limited number of key markers, make the entire image carry key markers all over. Imagine a grid across the whole image, perhaps of square regions just 12x12 pixels in size (adjustable, but each square the same size). At the center of each square, place a motion tracker. Do the same for each image; actually, the software does it for you. Then ask it to calculate the path for each little square, each 'tile': for any tile in the first image, where did that tile move to in the second image?
In some cases there may be exact matches, and it may find that a tile didn't move at all and sits at exactly the same location; most likely it will find it nearby, at a slightly or very different position. It may not find the exact same tile anywhere, so it calculates correlation scores to see which candidate is the best match. A corner of an eye is still a similar corner of an eye even if slightly different. The motion estimation approach will find where things went from image 1 to image 2, and will let you set thresholds so that it can fall back to simply blending the tiles if nothing close enough and similar enough is found for a given tile.
You also get to tell it how many frames of interpolation you want: try 5, 10, 30, 100, whatever. Once it has an idea of where each tile moves from frame 1 to frame 2, it allocates memory for the in-between frames (5, 10, 30... whatever you asked for) and starts interpolating, moving the tiles from point A to point B and also interpolating their pixel values (in RGB space, 3 channels in parallel, or in HSV space, or other). Remember, most tiles won't find their identical counterpart; they'll interpolate toward point B as they also morph (move) to it. There's often a setting sometimes called the wiggle factor, or something else indicating it relates to noise and how much of it the algorithm detects in a tile. Too much noise and it will give up and resort to blending (for example, when looking at fast-moving foam on top of fast, erratic ocean waves: next to impossible for motion tracking).
In the end, you'll have motion-estimated interpolation (aka motion prediction). You'll see a whole bunch of frames that start from known image 1 and finish at known image 2. The rest are interpolated images that show movement happening on all or most tiles of the images.
Then you pick the image that fits your needs, if you can find one. This technique is tricky if there is too much movement between the start and end images. It works great for things that change a little all over, such as video noise or other small changes. In many implementations you can adjust the size of the tiles and how far to look for possible candidates.
There will be thin border regions that can't be used (because the algorithm has no idea what lies outside of the image; it can only guess). Just crop them off. Most automated motion prediction systems crop automatically and resample/resize the remaining image back to the original size, so you'll notice a little bit of zooming in. You can also add an extra region around the original images before letting it do the M.E., so that it can feed off the same outer pixels, or whatever you add to the outer 'frame'. It may help to have the same frame on both of the two images.
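To make technique #3 concrete, here is a minimal single-channel sketch of the tile matching step (function names, tile size, and the noise threshold are all illustrative; real motion estimators are far more optimized):

```python
import numpy as np

def best_match(tile, img2, cy, cx, tile_size=12, search=16):
    """Brute-force block matching: try every offset within +/-search pixels
    of the tile's home position and keep the one with the lowest sum of
    absolute differences (a simple correlation measure)."""
    h, w = img2.shape
    best, best_err = (0, 0), np.inf
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            y, x = cy + dy, cx + dx
            if y < 0 or x < 0 or y + tile_size > h or x + tile_size > w:
                continue  # candidate would fall off the image border
            cand = img2[y:y + tile_size, x:x + tile_size]
            err = np.abs(tile - cand).sum()
            if err < best_err:
                best_err, best = err, (dy, dx)
    return best, best_err

def motion_field(img1, img2, tile_size=12, search=16, noise_threshold=0.25):
    """One motion vector per tile; tiles whose best match is still too
    different ('too noisy') get None, meaning: fall back to blending."""
    h, w = img1.shape
    field = {}
    for cy in range(0, h - tile_size + 1, tile_size):
        for cx in range(0, w - tile_size + 1, tile_size):
            tile = img1[cy:cy + tile_size, cx:cx + tile_size]
            (dy, dx), err = best_match(tile, img2, cy, cx, tile_size, search)
            mean_err = err / (tile_size * tile_size)
            field[(cy, cx)] = (dy, dx) if mean_err < noise_threshold else None
    return field

# Tiny demo: image 2 is image 1 shifted 3 px down and 2 px right, so every
# tile's motion vector should come back as (3, 2). In-between frame k of n
# would then move each matched tile k/n of the way along its vector while
# cross-fading its pixel values; unmatched (None) tiles just blend.
rng = np.random.default_rng(0)
img1 = rng.random((48, 48))
img2 = np.roll(img1, shift=(3, 2), axis=(0, 1))
print(motion_field(img1, img2, search=4)[(12, 12)])  # -> (3, 2)
```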
I don't know if Photoshop has such a feature; it is something you see more with animation and video. PS does have quite an arsenal of tools for handling animation nowadays, but still, motion estimation is often the domain of a separate plugin at $99. Look around to see if you can find one for PS. I assume you're looking to do this (or at least significant parts of it) in PS, since you posted here.
If you'd like some examples of technique #3, look at the Motion Prediction module of project Dogwaffle (sorry, Windows only): http://www.thebest3d.com/howler/7/motion-prediction-module.html
This version is not running on the GPU yet, but it is multi-threading enabled, so the more CPU cores you have, the faster it goes. It is very compute-intensive. Here are some numbers to give you an idea:
http://www.thebest3d.com/howler/7/motion-prediction-mania.html
I hope this helps. If you find a motion estimation technique suitable to your tools, I'd love to learn about it.
Thread: Got my new site up and running | Forum: Bryce
Very nice; I love the space scene upon arrival.
Do you use Dogwaffle, by any chance? It's definitely a site we'd mention in the newsletter to help promote it. Ping me if curious.
Thread: 3D Text for Zoo Plaques | Forum: Photoshop
staigermanus posted at 6:14PM Sun, 16 July 2017 - #4309966
By the way, on that last note, I am curious: I see the braille letters as punched in, not sticking out. Is it my perception that's flawed, or is the lighting playing a trick on me? I just can't imagine the braille dimples going into the material; I always assume the way to feel them by touch is for them to stick out. Is that a glitch in this case?
Ah, now I see. Yes, it was playing a trick on me. I always assume the lighting comes from north, north-east, or above (let's say from 9am to 3pm), but this one is lit from below, so the opposite side was lit, making me think it's a hole rather than a dimple. My bad.
Thread: 3D Text for Zoo Plaques | Forum: Photoshop
By the way, on that last note, I am curious: I see the braille letters as punched in, not sticking out. Is it my perception that's flawed, or is the lighting playing a trick on me? I just can't imagine the braille dimples going into the material; I always assume the way to feel them by touch is for them to stick out. Is that a glitch in this case?
Thread: 3D Text for Zoo Plaques | Forum: Photoshop
SNARKLER posted at 6:08PM Sun, 16 July 2017 - #4309947
Cool. Thanks. Will experiment.
Please be sure to share your results, or at least the various steps leading to them if you can't share it all. I'm curious about how you do it in PS. I have a general sense of some of the steps, and the details will vary depending on which version of PS one uses.
I'll also try a few more techniques to produce an even closer mimic of what you showed first, including the braille, since there are online converters.
Thread: 3D Text for Zoo Plaques | Forum: Photoshop
Of course, if you explore the final printing side of it with 3D Builder on Windows 10, you'll notice it has some built-in abilities for adding embossed text, so you might only need to do the tag's geometry in your imaging tool and then finish there. Me, I prefer however to visualize various renderings before going to print. PS has a whole gamut of rendering options in 3D. You should easily get to something like this in straight PS. Just explore the 3D menu options, especially greyscale: for example, start with a black background, draw the tag's shape and fill it mid-grey, add a punched hole, add some raised (brighter) text, and blur it slightly for raised transitions.
Thread: 3D Text for Zoo Plaques | Forum: Photoshop
I'd think it best to use a greyscale image. PS does that; Dogwaffle and others too. Then you can essentially start by typing the text with the text tools, white on black. White is high, raised elevation; black is low; various grey levels in between come by way of blurring or smoothing the greyscale elevation map. Want to add an old, weathered look for a dino tag? Add erosion, noise, ...
Then load the greyscale and turn it into 3D. Add thickness to make it a solid; most 3D printers need it as a solid.
If you're on Windows 10, you have an app included that can take your greyscale and add thickness to it. I'd think PS can do it too. Then export to STL or OBJ and use the 3D app on Windows 10 to load and print. The app is called 3D Builder, included with Windows 10. The Creators edition has painting options too, if you want to colorize deep places for a rusty look, for example, when it's a metallic plaque, or other aging effects for cardboard texture, wood, etc.
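The greyscale-elevation part is easy to prototype outside of PS too; here is a minimal Pillow sketch of the steps described above (the text, sizes, and font path are placeholders to adjust for your system):

```python
from PIL import Image, ImageDraw, ImageFilter, ImageFont

W, H = 600, 300
img = Image.new("L", (W, H), 0)          # greyscale; black = lowest elevation
draw = ImageDraw.Draw(img)

# Tag shape at mid-grey, with a punched hanging hole kept black (low).
draw.rounded_rectangle([20, 20, W - 20, H - 20], radius=30, fill=128)
draw.ellipse([40, 40, 80, 80], fill=0)

# Raised (brighter) text; the font path is a placeholder for your system.
font = ImageFont.truetype("DejaVuSans-Bold.ttf", 72)
draw.text((140, 100), "T-REX", fill=255, font=font)

# A slight blur softens the elevation transitions before the 3D conversion.
img.filter(ImageFilter.GaussianBlur(2)).save("plaque_heightmap.png")
```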
Thread: Content Creation for DAZ Studio & Poser - Carrara or Hexagon? | Forum: Carrara
Xatren posted at 8:06AM Thu, 03 December 2015 - #4242125
staigermanus posted at 10:30PM Wed, 02 December 2015 - #4242117
This stuff is modeled in Hexagon, and rendered in Modo.....
Those are very nice. One question though. Why not just model stuff in Modo? Personal preference?
Yes indeed. A long while ago he was using Amapi, and Hexagon was the natural transition after Amapi, with numerous similar tools and features. I guess once you learn a tool that does well for what you need, and once you have a good workflow, there's no need to re-invent the wheel. So he still uses Hexagon for modeling new characters/critters/props and renders in Modo. The end product goes to large printers; these are mainly made for vinyl stickers and decals.