_dodger opened this issue on Dec 16, 2002 · 12 posts
_dodger posted Thu, 26 December 2002 at 9:20 PM
At higher JPEG compression levels there will be a slight difference. At max quality, however, there should be almost none, because the algorithm throws away very little at that setting (strictly speaking, even 100% quality isn't perfectly lossless, since the compression still involves some rounding). Basically, and simplifying a lot, how JPEG works is this: for each colour channel, all the even gradients are found, while edges are left as-is. These even blocks of gradient are changed from pixel information into equations describing the gradient, which are generally much smaller than the pixels themselves. The 'lossiness' or 'compression/quality' ratio is simply a rule that decides what counts as an edge.

JPEGs use 24-bit colour, which means there are 256 shades (8 bits) available for each channel, including black and white. If pixel 1A is 128 and pixel 1B is 129, then at 100% quality those two pixels become one JPEG coordinate describing a gradation from 128 to 129 between those two pixels. That doesn't mean a lot in terms of two pixels and doesn't help at all for one, but when you have 12 pixels in a row that form an even gradient (say 128 to 139), you have seven bytes of data for those 12 bytes -- start 1A @ 128, end 12A @ 139, gradient angle 0 degrees. And when you have a 12x12 block that fades in one even gradient across the entire thing, then you have 144 bytes covered by 7 bytes of data.
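To make the gradient-run idea concrete, here's a toy sketch in Python. To be clear, this is not how JPEG is actually implemented (real JPEG works on 8x8 blocks with a discrete cosine transform); it just illustrates the simplified model above. The names `encode_row` and `decode_row` and the `tolerance` parameter are invented for this example, with `tolerance` playing the role of the quality knob: at 0 the round trip is exact, and higher values treat more pixel steps as "the same gradient" and lose detail.

```python
def encode_row(pixels, tolerance=0):
    """Collapse runs of even gradient in one row of 8-bit greyscale pixels.

    Each run becomes a 4-tuple (start_index, start_value, end_value, length),
    so a long smooth ramp shrinks to a few numbers. `tolerance` decides what
    counts as an 'edge': a step that deviates from the run's step by more
    than `tolerance` ends the run.
    """
    runs = []
    i, n = 0, len(pixels)
    while i < n:
        j = i
        if i + 1 < n:
            step = pixels[i + 1] - pixels[i]  # the run's per-pixel step
            j = i + 1
            # extend the run while consecutive steps stay within tolerance
            while j + 1 < n and abs((pixels[j + 1] - pixels[j]) - step) <= tolerance:
                j += 1
        runs.append((i, pixels[i], pixels[j], j - i + 1))
        i = j + 1
    return runs


def decode_row(runs):
    """Rebuild the pixel row by interpolating each stored gradient run."""
    out = []
    for _start, v0, v1, length in runs:
        if length == 1:
            out.append(v0)
        else:
            step = (v1 - v0) / (length - 1)
            out.extend(round(v0 + step * k) for k in range(length))
    return out


# The 12-pixel ramp from the post: 128..139 collapses to one tuple.
row = list(range(128, 140))
runs = encode_row(row)
print(runs)              # one run instead of 12 pixel values
print(decode_row(runs) == row)  # exact round trip at tolerance=0
```

Raising `tolerance` merges near-even gradients into one run, which is smaller but no longer reconstructs the original pixels exactly, which is the quality/size trade-off in miniature.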