
Dynamic range is not really relevant to compression. Entropy is much more meaningful, and error images typically have a huge amount of entropy.
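As a rough illustration of the distinction, the empirical Shannon entropy of a symbol stream bounds how far a lossless coder can shrink it, regardless of the stream's dynamic range. This is a minimal sketch; the `entropy_bits` helper and the sample data are illustrative, not from the thread:

```python
import math
from collections import Counter

def entropy_bits(data):
    """Empirical Shannon entropy of a symbol list, in bits per symbol."""
    counts = Counter(data)
    n = len(data)
    return -sum(c / n * math.log2(c / n) for c in counts.values())

# Uniform bytes: full 8 bits/symbol, incompressible by an entropy coder.
print(entropy_bits(list(range(256))))          # 8.0

# Residuals clustered near zero: wide dynamic range possible, low entropy.
print(entropy_bits([0] * 90 + [1] * 5 + [-1] * 5))  # ~0.57
```

The second stream still needs signed values, but its skewed distribution is what makes it cheap to code.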


That doesn't make any sense. You've taken an image with arbitrary bytes and turned it into one where the bytes are tightly clustered around a small range of values. That's perfect for Golomb coding, for instance.
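A minimal sketch of that idea, assuming Rice coding (the power-of-two special case of Golomb coding) applied to a hypothetical list of small signed residuals:

```python
def rice_encode(residuals, k):
    """Rice-code small signed integers as a bit string.

    Each value is zigzag-mapped to a non-negative integer, then split into
    a unary quotient and a k-bit binary remainder.
    """
    bits = []
    for v in residuals:
        u = 2 * v if v >= 0 else -2 * v - 1   # zigzag: 0,-1,1,-2,2 -> 0,1,2,3,4
        q, r = u >> k, u & ((1 << k) - 1)
        bits.append("1" * q + "0")            # quotient in unary
        bits.append(format(r, f"0{k}b"))      # remainder in k bits
    return "".join(bits)

# Six residuals clustered near zero fit in 16 bits instead of 48.
print(len(rice_encode([0, 1, -1, 2, 0, -2], k=1)))  # 16
```

Values near zero get the shortest codewords, which is exactly the distribution a good predictor leaves behind.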


No free lunches. For a lossless compressor, (information-theory bits of polygons) + (information-theory bits of remaining error) >= (information-theory bits of original image), where the strict inequality covers the case that the two parts aren't perfectly separated by your process or have unavoidable overlap.

I specify "information theory bits" because they aren't really what you see in the computer's RAM; they're closer to "post-compression bits". But regardless, no matter how you move the encoding around there is no escaping information theory.


Obviously. That's the definition of compression. If the image is well modeled by overlapping polygons plus small residuals, the encoding will better approach the uncomputable ideal, and thus compress.



