TomDart opened this issue on Jan 21, 2008 · 20 posts
MGD posted Mon, 21 January 2008 at 7:13 PM
Tom,
Here are my initial comments ...
On the KenRockwell site, he talks about 12 bit information, 10 bit information,
and also says, "coding to pack 14 bit linear raw data into 8 bit JPGs".
Unless we examine the actual file formats, we can't be sure whether he is
stating facts or just spouting bullsh*t.
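For what it's worth, the usual way to squeeze 14 bit linear data into 8 bits
is a gamma-style tone curve. I don't know what curve his camera actually
applies, so this little Python sketch (the function name is mine) just shows
why that kind of packing throws information away:

    # Rough sketch: map 14 bit linear sensor values (0..16383) to 8 bit
    # output (0..255) with a simple gamma-style curve. The exact curve a
    # real camera uses will differ -- the point is the quantization.
    def pack_14bit_to_8bit(value14, gamma=2.2):
        linear = value14 / 16383.0          # normalize to 0..1
        encoded = linear ** (1.0 / gamma)   # compress highlights, stretch shadows
        return round(encoded * 255)         # quantize down to 8 bits

    # Count how many distinct outputs survive from 16384 distinct inputs:
    print(len({pack_14bit_to_8bit(v) for v in range(16384)}))  # 256 at most

16384 input codes end up crowded onto no more than 256 output codes, so plenty
of distinct sensor values collapse together and can never be told apart again.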
This much is certain ... smaller bit depth means a coarser range of colors. The
first scanner I owned touted 24 bit color. I thought that meant 24 bits
per color ... Wrong! ... it meant 8 bits per color ... and that meant that
a scan of a photo did not have the same smooth gradation of color as
the original ... no getting around it.
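Just to put numbers on that (plain arithmetic, written out in Python for
convenience):

    bits_total = 24
    bits_per_channel = bits_total // 3       # R, G and B share the 24 bits
    print(bits_per_channel)                  # 8 bits per channel
    print(2 ** bits_per_channel)             # 256 levels per channel
    # What I had assumed the scanner meant -- 24 bits for EACH channel:
    print(2 ** 24)                           # 16,777,216 levels per channel

256 levels per channel versus nearly 17 million ... that is where the visible
steps in a smooth gradient come from.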
If you want to test that out, change the settings on your color monitor
from true color (32 bit) to high color (16 bit) to 256 colors to 16 colors
... see if you can convince yourself that the images look just as good
with fewer bits per color.
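If you would rather see it without fiddling with your display settings, here
is a rough sketch using the Python Imaging Library -- assuming you have it
installed, and "photo.jpg" is just a placeholder for any photo of your own:

    from PIL import Image

    img = Image.open("photo.jpg").convert("RGB")
    # Throw away the low 3 bits of each channel (8 bits -> roughly the
    # 5 bits per channel you get in 16 bit "high color" mode):
    banded = img.point(lambda v: (v // 8) * 8)
    banded.save("photo_5bit.png")

Look at a sky or a smooth skin tone in the saved file and the banding should
jump right out.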
That is plain and simple NOT true. ... And with lossy compression,
you can't get back all of the original information ... no way ... no how ...
not at all.
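You can check that for yourself too. A quick round trip through JPEG (again
using the Python Imaging Library; the file names here are placeholders) shows
the pixels simply do not come back the same:

    from PIL import Image

    original = Image.open("original.png").convert("RGB")
    original.save("roundtrip.jpg", quality=75)   # lossy save
    recovered = Image.open("roundtrip.jpg").convert("RGB")

    # Count how many pixels changed in the round trip:
    diffs = sum(1 for a, b in zip(original.getdata(), recovered.getdata()) if a != b)
    w, h = original.size
    print(diffs, "of", w * h, "pixels changed")

On any real photo a large share of the pixels will have shifted, and no
decoder can reconstruct the values that were discarded.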
That won't become a reality if we keep our software up to date.
--
Martin