
How does the data get encoded with 10.5 bits of precision yet display correctly on an 8-bit decoder, while potentially displaying even more accurately on a 10-bit decoder?


Through non-standard API extensions, you can provide a 16-bit data buffer to jpegli.
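
A minimal sketch of how that might look with jpegli's libjpeg-style C API; the include path, the jpegli_set_input_format call, and the JPEGLI_TYPE_UINT16 / JPEGLI_NATIVE_ENDIAN constants are taken from the libjxl tree and should be treated as assumptions, not a stable documented interface:

    /* Hedged sketch, not a drop-in example: the jpegli_* names come from
       libjxl's lib/jpegli/encode.h and may differ in your build. */
    #include <stdint.h>
    #include <stdio.h>
    #include "jpegli/encode.h"   /* assumed include path */

    static void encode_gray16(uint16_t *pixels, int width, int height,
                              unsigned char **out, unsigned long *out_size) {
      struct jpeg_compress_struct cinfo;
      struct jpeg_error_mgr jerr;

      cinfo.err = jpegli_std_error(&jerr);
      jpegli_create_compress(&cinfo);
      jpegli_mem_dest(&cinfo, out, out_size);

      cinfo.image_width = width;
      cinfo.image_height = height;
      cinfo.input_components = 1;
      cinfo.in_color_space = JCS_GRAYSCALE;
      jpegli_set_defaults(&cinfo);

      /* The non-standard extension: declare the input buffer as uint16
         samples so the extra precision survives into the DCT coefficients. */
      jpegli_set_input_format(&cinfo, JPEGLI_TYPE_UINT16, JPEGLI_NATIVE_ENDIAN);

      jpegli_start_compress(&cinfo, TRUE);
      while (cinfo.next_scanline < cinfo.image_height) {
        /* Rows are still passed as JSAMPROW pointers, but with the input
           format above they point at 16-bit data. */
        JSAMPROW row = (JSAMPROW)&pixels[cinfo.next_scanline * width];
        jpegli_write_scanlines(&cinfo, &row, 1);
      }
      jpegli_finish_compress(&cinfo);
      jpegli_destroy_compress(&cinfo);
    }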

The data is carefully encoded in the DCT coefficients. They are 12 bits, so in some situations you can get as much as 12-bit precision. Quantization errors, however, add up, and the worst case is about 7 bits. Luckily that occurs only in the noisiest areas; in smooth slopes we can get 10.5 bits or so.
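
As a back-of-the-envelope illustration of where the extra precision in smooth regions comes from, take an idealized flat block whose samples all equal the mean \bar{s}; with the standard JPEG forward-DCT normalization the DC coefficient is

    \[
      S(0,0) = \tfrac{1}{4}\,C(0)\,C(0)\sum_{x=0}^{7}\sum_{y=0}^{7} s(x,y)
             = \tfrac{1}{4}\cdot\tfrac{1}{\sqrt{2}}\cdot\tfrac{1}{\sqrt{2}}\cdot 64\,\bar{s}
             = 8\,\bar{s}
    \]

so with a DC quantization step of 1 (an assumption for this example), rounding the coefficient to an integer still pins the block mean to a grid of 1/8 of a code value, with at most 1/16 of a code value of error (roughly three extra bits beyond the 8-bit samples). Real smooth slopes also spend error budget on a few AC coefficients, which is why the figure lands nearer 10.5 bits than 11.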


8-bit JPEG actually uses 12-bit DCT coefficients. Traditional JPEG coders accumulate a lot of error because they round intermediate values to 8 bits quite often, while jpegli always uses floating point internally.
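
As a toy illustration of how 8-bit intermediate rounding hurts (this is not jpegli's actual code path, just a sketch of the effect on a color-transform round trip):

    #include <math.h>
    #include <stdio.h>

    /* Toy illustration: measure how much error rounding the intermediate
       YCbCr values to 8-bit integers adds to an RGB round trip, compared
       with keeping the intermediates in floating point. */
    int main(void) {
      double max_err_rounded = 0.0, max_err_float = 0.0;
      for (int r = 0; r < 256; r += 5) {
        for (int g = 0; g < 256; g += 5) {
          for (int b = 0; b < 256; b += 5) {
            double y  = 0.299 * r + 0.587 * g + 0.114 * b;
            double cr = 0.5 * r - 0.418688 * g - 0.081312 * b + 128.0;

            /* Red channel reconstructed from rounded vs. unrounded
               intermediates. */
            double r_rounded = round(y) + 1.402 * (round(cr) - 128.0);
            double r_float   = y + 1.402 * (cr - 128.0);

            double e1 = fabs(r_rounded - r), e2 = fabs(r_float - r);
            if (e1 > max_err_rounded) max_err_rounded = e1;
            if (e2 > max_err_float) max_err_float = e2;
          }
        }
      }
      printf("max error with 8-bit intermediates: %.4f code values\n",
             max_err_rounded);
      printf("max error with float intermediates: %.4f code values\n",
             max_err_float);
      return 0;
    }

Rounding the intermediates alone already costs on the order of a code value in the worst case, and that kind of error compounds across the color transform, DCT, and quantization stages of an integer pipeline.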



