


Subject: More on B&W Conversions


zhounder ( ) posted Wed, 07 May 2003 at 11:54 PM · edited Sat, 07 December 2024 at 12:32 PM

I have been thinking about how a digital camera takes a B&W image compared to a color image. After studying CCDs and CMOS chips on the web, as well as monitors, I have come to a few conclusions that are not in line with nplus's statements about taking images in B&W mode.

First, let's look at the digital camera itself compared to a film camera. A digital camera captures light as it sees it, and that is in color. The CCD is not changed when the mode is switched to B&W, sepia, infrared, or any other "mode"; what changes is the processing of the image. The CCD (or CMOS) chip is still seeing the image in color, and the processing is what is altered to produce the selected effect, in this case B&W. The process that changes the image to B&W on our display device is software within the camera itself. When we shoot a standard color capture and then post-process it to B&W, we are only changing the methods and software used to alter the image. Whether you change the image in the camera or in post-processing software is irrelevant; it is still a conversion of a color image to a B&W or grayscale image.

Here is why. The media chip in a digital camera captures light and converts it into an electrical charge, which the software in the camera then converts into a value. The software translates that raw value into a color value, which is matched to the closest entry in the palette assigned by whatever mode the camera is in, and the resulting data is transferred to memory. That is the information sent to our display device. The value of the captured charge is the same either way; only the color palette applied to it has changed, and the palettes are determined by software developers, not by the chip.

With film it works a bit differently. The film is exposed to light, and this creates a negative value, or exposure. There is no selection of color palettes, nor is there a translation of values; film only records the values for which it is constructed, and there are no alternatives. B&W film sees light differently than color film, so the image is captured differently on each emulsion, with different effects on the final image.

Now let's take a quick look at monitors. When we see a grayscale image on our screens, are we really seeing the color gray? Actually, no, we aren't. We are seeing what our eyes perceive as gray. The monitor you are looking at right now only has the ability to display information based on three colors (not considering LCD screens): red, green, and blue (RGB). Before you scream, "But I have my monitor set to 16 million colors!" consider this: the data sent from your graphics card includes only RGB. A standard video cable has 15 pins, 6 devoted to color and 9 to other types of data. The 6 we are concerned with carry the red, green, and blue signals and their returns. (The entire pin setup can be found here: http://computer.howstuffworks.com/monitor2.htm) So how do we see 16 million colors with only three color inputs? By varying the intensity of each channel: how red is the red, how green is the green, and how blue is the blue? With 256 levels per channel, the three channels combine into 256 × 256 × 256 = 16,777,216 variations of color.

What does all this mean, and how does it relate to taking B&W images on a digital camera? Basically, if you shoot in B&W on a digital camera, you are trusting a programmer's opinion on what palette to use; if you convert your image in Photoshop or a similar program, you are trusting your own opinion on what palette to use, and how your monitor renders that palette. Magick Michael
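To make the "palette is an opinion" point concrete, here is a minimal sketch (assuming Python with NumPy and Pillow; the filename capture.jpg is just a placeholder) that applies three different channel weightings to the same color data. An in-camera B&W mode is, in effect, one such weighting chosen by the firmware's programmer:

```python
import numpy as np
from PIL import Image  # pip install pillow

# Load an RGB capture; the filename is a placeholder.
rgb = np.asarray(Image.open("capture.jpg").convert("RGB"), dtype=float)

# Three of the many possible "palettes" (channel weightings) for the
# same color data -- each yields a different-looking B&W image.
weights = {
    "average":    (1/3, 1/3, 1/3),        # naive mean of R, G, B
    "luminosity": (0.299, 0.587, 0.114),  # ITU-R BT.601 luma weights
    "green_only": (0.0, 1.0, 0.0),        # mimics a green-filtered look
}

for name, (wr, wg, wb) in weights.items():
    gray = wr * rgb[..., 0] + wg * rgb[..., 1] + wb * rgb[..., 2]
    Image.fromarray(gray.clip(0, 255).astype(np.uint8)).save(f"bw_{name}.png")
```

Same capture, three different B&W renderings; none of them is more "correct" than the others, which is exactly the point about trusting someone's choice of palette.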


MzQt ( ) posted Thu, 08 May 2003 at 12:12 AM

I am in awe of your ability to retain all that and harder yet, type it out so we can understand it. I truly appreciate all that information as it does make sense. Thanx for saving me a whole lot of reading on the topic, as it may have cut into my photography time and we can't have that ;)


Misha883 ( ) posted Thu, 08 May 2003 at 7:03 AM

Neat technical view. ['Chelle knows I like this sort of thing.] NP did have an interesting point, that with conventional CCDs the R, G, and B sensors are spatially separated, so it should be possible to extract somewhat more resolution. [Newer sensors, like the Foveon, are different.] Exactly how to do this is not really clear, as the light has already been filtered by some sort of three-color "shadow mask." But there is already quite a bit of mathematics going on inside these cameras to interpolate across the three sensors, so maybe something is possible. [I'm hoping someone with a digital camera posts some comparison shots using both modes...] The points you make about color, and the subjectiveness of the palette, are very true. Quite a lot is opinion and approximation, even with analog film. [I'm typing this on my slightly green monitor, which is right next to my slightly blue monitor. The brain adjusts.]
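The extra-resolution idea is easier to see with the sensor layout written out. Here is a minimal sketch (Python with NumPy assumed; the raw mosaic is synthetic) of the common RGGB Bayer arrangement, showing how sparsely each color channel is actually sampled, and why a firmware B&W mode could in principle treat every photosite as a luminance sample instead of interpolating:

```python
import numpy as np

# A hypothetical 4x4 raw mosaic straight off the sensor (one value per
# photosite), laid out in the common RGGB Bayer pattern:
#   R G R G
#   G B G B  (repeating)
raw = np.random.randint(0, 256, size=(4, 4))

# Masks marking which photosites sit behind which color filter.
r_mask = np.zeros((4, 4), bool); r_mask[0::2, 0::2] = True
b_mask = np.zeros((4, 4), bool); b_mask[1::2, 1::2] = True
g_mask = ~(r_mask | b_mask)

# In color mode, each channel is known at only 1/4 (R, B) or 1/2 (G) of
# the pixel sites; the rest must be interpolated (demosaiced).
print("R samples:", r_mask.sum(), "of", raw.size)   # 4 of 16
print("G samples:", g_mask.sum(), "of", raw.size)   # 8 of 16
print("B samples:", b_mask.sum(), "of", raw.size)   # 4 of 16

# A B&W mode *could* read every photosite directly as a luminance sample
# (still color-filtered, but never interpolated). Whether any camera
# actually does this is entirely up to its firmware.
```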


zhounder ( ) posted Thu, 08 May 2003 at 8:18 AM

Misha, I would say simplification rather than approximation. Keep in mind that the CCD sensor only picks up what it sees as light and converts that to an electrical impulse; it does no color interpretation at all. All of that is done in post-processing, so technically the image is captured in color. Converting that electrical impulse to represent color, RGB or B&W, is done outside the CCD sensor. More on CCD and CMOS sensors can be found here: http://electronics.howstuffworks.com/digital-camera3.htm You wanted some test shots; the next two posts are mine. The originals can be found at www.zhounder.com/test/test.zip [3 MB file] Magick Michael


zhounder ( ) posted Thu, 08 May 2003 at 8:22 AM

file_57413.jpg

Color test shot


zhounder ( ) posted Thu, 08 May 2003 at 8:26 AM

file_57414.jpg

B&W test shot


Michelle A. ( ) posted Thu, 08 May 2003 at 10:03 AM

Hmmm.....what happened to the B&W shot?....there is almost no distinction in tone between the red/green/blue...

I am, therefore I create.......
--- michelleamarante.com


zhounder ( ) posted Thu, 08 May 2003 at 10:08 AM

I am sure the white paper of the book has something to do with that.


bsteph2069 ( ) posted Thu, 08 May 2003 at 1:57 PM

Yup, yup... I agree with Michelle and zhounder. I do wonder what zhounder's camera decided was the lighter image. Let's face it: although cameras in general simply record what they "see," they also have to establish a baseline for what zero is, for backlighting modes, focusing, etc. I wonder if this depends on the camera.

Here is how I see it, so to speak. According to nplus, as I recall, a B&W image should be sharper because less data is required for it. Unfortunately, I don't think the camera will in general decide to retain more image information just because less specific information is required.

OK, let me try this again. Say a picture requires, for simplicity, 30 units of information. If it is recorded as RGB, then roughly, for the sake of argument, that translates to 10 units per color. What nplus was saying is that if you take a B&W picture, all 30 units can go to tonal information instead of being split three ways. What zhounder is saying is that the camera will still save the info as 30 units of RGB data; it has simply converted the data to shades of color that look to us like B&W. (E.g., 0% red, 0% blue, and 0% green is black; 100% of all three is white; so 50% of each is a mid gray.)

OK, here is the rub. Depending on whether I set focus to frame or field, my camera gives me an additional 20% of resolution! One setting interlaces more vertical scans from the CMOS than the other! So I do wonder if nplus is possibly right. Bsteph
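The "units of information" argument is really about bits per pixel. A quick back-of-envelope sketch (assuming the usual 8 bits per channel; the frame size is arbitrary) shows where nplus's intuition comes from, and why it only pays off if the camera's firmware actually stores a single-channel file:

```python
# Bit budget for one 1600x1200 frame, assuming 8 bits per channel
# (values 0-255). Whether a camera's B&W mode actually stores the
# smaller single-channel file is up to its firmware.
width, height = 1600, 1200
pixels = width * height

rgb_bits  = pixels * 3 * 8   # three 8-bit channels per pixel
gray_bits = pixels * 1 * 8   # one 8-bit luminance channel per pixel

print(f"RGB:       {rgb_bits // 8 / 1e6:.1f} MB uncompressed")   # 5.8 MB
print(f"Grayscale: {gray_bits // 8 / 1e6:.1f} MB uncompressed")  # 1.9 MB
```

If the B&W mode merely writes the same three-channel file with equal R, G, and B values, as zhounder describes, none of that potential saving (or sharpening) is realized.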


MzQt ( ) posted Thu, 08 May 2003 at 2:27 PM

Bsteph, I followed everything up to the last paragraph. I have a few questions, and please forgive my ignorance, for I am new to all of this. #1. What is CMOS on a camera? #2. Is it your opinion that only your model of camera will give this extra 20% resolution on some settings, or do you think most models will do that? [I have a Canon PowerShot A40 (2 MP)... no snickering allowed ;)] I am quite interested in B&W photography, so I'm just trying to figure out which mode is better to photograph in. Converting is not a problem for me, as I enjoy having control over the different levels and achieving the results that best suit my tastes. If I could achieve a sharper, more detailed picture by photographing in B&W mode, then I may prefer to do it that way. Or do most of you think the difference is too hard to detect for the average viewer?


DHolman ( ) posted Thu, 08 May 2003 at 3:10 PM

Hmmm...this is interesting. In order to make a decent B&W image you need to use R, G, and B together; otherwise you'll have color dropout (for instance, if you don't use the R channel, anything that is red in your scene will lose some or all detail). I think it may be possible that Mike and NPlus are both right, but that it may depend on your camera. If you think strictly in a 3-element-per-pixel arrangement (1R, 1G, 1B for each pixel in the final image), then you can't get any more detail in grayscale than you can in color. However, I believe most medium-priced and higher digital cameras add a fourth element, so that it's G-R-G-B. Now the question becomes: can the camera in some way break apart (not physically) this group of four that it uses for color mode and use the groups in a slightly different configuration for grayscale? shrug -=>Donald
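The dropout point can be demonstrated numerically. In this minimal sketch (NumPy assumed; the scene is synthetic), two different reds sit on a gray background: a mix that skips the red channel maps both reds to the same value, destroying the distinction, while a balanced luminosity mix keeps them apart:

```python
import numpy as np

# A tiny synthetic scene: mid-gray background with two distinct reds.
scene = np.full((4, 4, 3), 128.0)   # gray = (128, 128, 128)
scene[1, 1] = (255, 0, 0)           # bright red
scene[1, 2] = (150, 0, 0)           # darker red

# A mix that ignores the red channel entirely (G and B only):
no_red = 0.5 * scene[..., 1] + 0.5 * scene[..., 2]
# A balanced luminosity mix that includes red:
luma = 0.299 * scene[..., 0] + 0.587 * scene[..., 1] + 0.114 * scene[..., 2]

print(no_red[1, 1], no_red[1, 2])   # 0.0  0.0  -> the two reds collapse together
print(luma[1, 1], luma[1, 2])       # 76.2 44.9 -> the red detail survives
```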


bsteph2069 ( ) posted Thu, 08 May 2003 at 3:27 PM

file_57415.jpg

1. Canon makes nice cameras, and a PowerShot does seem to be a nice entry; you get a good amount of resolution and zoom for your wallet, in my opinion.
2. Personally, I'm still using my Sony Mavica with its whopping 640 × 480 =;-P (But it does have a 10X zoom!!!)
3. I think it may depend on the camera (the resolution changes).
4. Regarding the differences between converted-to-B&W, photographed-as-B&W, and actual RGB, you be the judge. Personally, I think at my camera's resolution it does not make a difference.

First, the color shot.


bsteph2069 ( ) posted Thu, 08 May 2003 at 3:29 PM

file_57416.jpg

This is the converted greyscale. Via Microsoft Photo Editor ver. 3.0


bsteph2069 ( ) posted Thu, 08 May 2003 at 3:31 PM

file_57417.jpg

Finally PHOTOGRAPHED as b/w.


doca ( ) posted Fri, 09 May 2003 at 7:05 PM

Interesting thread. I will have to add some of my own images. Till then, since I have to say something regardless of whether I know what I am talking about or not, there are two points I want to make.

1. If I take a simple color image, say a plain leaf against a solid blue sky, I get a much smaller file size than if I take a many-colored, busy shot. Very much smaller. If the file size is smaller, how can there be as much information in one file as there is in the other? I guess to really test this I would need to go to TIFF instead of JPEG; I may try that and see what happens. But if I do, and find that the B&W file is smaller than the same color file, can we assume that there is less information? So if I take a white ball on a black background in color, and the same shot as B&W, and one file is smaller than the other, then the actual number of light pixels captured is less, and thus the definition would, I suppose, be less?

2. Never underestimate the software element. There may be very different approaches in how one camera handles grayscale vs. another, and that difference may yield significant results. Of course, the final result and judgment are based on the output device. As you point out, a monitor uses RGB to produce shades of black and white, and different monitors will do that with different results. The same is true of printers or any other output device. I actually read somewhere that taking a color photo as a negative and then converting it on the computer later produced better results than taking it the normal way. I still haven't figured that one out either.


DHolman ( ) posted Fri, 09 May 2003 at 8:49 PM

Doca - 1) File size, when using a compressed image format (like JPEG), is not an indication of resolution. The way JPEG works, large areas of a single color compress much better than areas with many different colors. 2) No software can add detail that was not seen by the original image capture. The maximum resolution of your image is dictated by the physical properties of the image array in your camera. The software can, however, do some things that give the perception of better resolution... a sort of digital "sleight of hand" to fool your eye. -=>Donald
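The first point is easy to verify. This minimal sketch (assuming Python with NumPy and Pillow) encodes two images of identical resolution, one flat and one busy, and compares their JPEG sizes:

```python
import io
import numpy as np
from PIL import Image  # pip install pillow

def jpeg_size(arr):
    """Return the JPEG-encoded size in bytes of an 8-bit RGB array."""
    buf = io.BytesIO()
    Image.fromarray(arr).save(buf, format="JPEG", quality=90)
    return buf.tell()

# Two images with identical resolution (identical pixel counts):
flat = np.full((480, 640, 3), (60, 120, 200), dtype=np.uint8)     # plain "sky"
busy = np.random.randint(0, 256, (480, 640, 3), dtype=np.uint8)   # noisy scene

print("flat scene:", jpeg_size(flat), "bytes")   # small file
print("busy scene:", jpeg_size(busy), "bytes")   # far larger file
# Same resolution either way -- JPEG file size tracks scene complexity,
# not how much detail the sensor captured.
```

This is why doca's leaf-against-blue-sky file comes out so much smaller than a busy shot, even though both contain the same number of pixels.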

