kuroyume0161 opened this issue on Feb 01, 2005 · 22 posts
kuroyume0161 posted Wed, 02 February 2005 at 1:33 PM
Well, I have two approaches so far. One involves 'de-embossing' the image. This will require proper embossing code - so far I've found nothing useful. My programming arena doesn't include much 2D graphics - the last time I played with 2D graphics was in DOS modes. :)

The other involves looking at the normal vectors as a function of 'lighting' - the directional change of the normal away from the median (pointing directly off the image face) in each of the lighting directions (X and Y). I don't see how or why integration/differentiation plays a part in the process of creating/converting normal maps - even if they are of the type baked from hi-res geometry. Unless someone is going to provide technical references to this process and why it's relevant, it isn't worth pursuing. I wouldn't mind going with the normal vectors (vector math I can handle), but how to interpret normalized direction vectors as height changes is another story.

And, yes, I realize that BUM isn't a real format, just Poser's odd version of a normal image map (no matter the file format utilized). Why couldn't CL/Metacreations have used industry-standard normal maps (whether object/global/tangent space)? Do they have something against standards and normality? ;)
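For what it's worth, the normal-vectors route does come down to plain vector math, and it's also where integration sneaks in: a tangent-space normal (nx, ny, nz) is perpendicular to the height surface, so its components encode the local slopes dh/dx = -nx/nz and dh/dy = -ny/nz, and recovering actual heights means summing (integrating) those slopes across the image. A minimal sketch of that idea - function names are my own for illustration, not from any Poser or BUM-related API:

```python
def decode_rgb(r, g, b):
    # Map 8-bit channels [0, 255] back to vector components [-1, 1]
    # (the common normal-map encoding; a flat surface is RGB 128,128,255).
    return (r / 127.5 - 1.0, g / 127.5 - 1.0, b / 127.5 - 1.0)

def normal_to_gradients(nx, ny, nz):
    # The normal is perpendicular to the height surface, so the slopes are:
    #   dh/dx = -nx / nz,  dh/dy = -ny / nz
    return (-nx / nz, -ny / nz)

def integrate_rows(gx_rows):
    # Naive height recovery: running sum of dh/dx along each row.
    # Real converters integrate along both axes and reconcile the two
    # (e.g. via a Poisson solve), but a running sum shows the principle.
    heights = []
    for row in gx_rows:
        h, out = 0.0, []
        for g in row:
            h += g
            out.append(h)
        heights.append(out)
    return heights
```

A flat normal (0, 0, 1) gives zero slope in both directions, so an all-blue normal map integrates to a flat height field - which matches the intuition that the interesting bump information lives in the red/green deviations.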
C makes it easy to shoot yourself in the foot. C++ makes it harder, but when you do, you blow your whole leg off.
-- Bjarne Stroustrup
Contact Me | Kuroyume's DevelopmentZone