This isn't strictly correct: you probably mean wrt compressed size. Compression is a tradeoff between size reduction and compression/decompression speed. So while things like Bellard's ts_zip (https://bellard.org/ts_zip/) or nncp compress really well, they are extremely slow compared to, say, zstd or the much faster compression scheme in the article. It's a totally different class of codec.
An LLM can be used to losslessly compress a string down to roughly the number of bits given by its next-token prediction loss over the string, by feeding the model's predicted token probabilities into an arithmetic coder. It's SOTA compression for the distribution of strings found on the internet.
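A minimal sketch of the idea, assuming GPT-2 via Hugging Face transformers as the LLM: it computes the ideal code length (sum of -log2 p for each token under the model), which is what an arithmetic coder driven by the model's next-token distribution would approach to within a couple of bits. It only measures the size; it doesn't emit the actual bitstream.

```python
# Sketch: compressed size of a string under an LLM + ideal arithmetic coder.
# Assumes GPT-2 from Hugging Face transformers; any autoregressive LM works.
import math
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2").eval()

text = "Compression is a tradeoff between size reduction and speed."
ids = tokenizer(text, return_tensors="pt").input_ids  # shape (1, T)

with torch.no_grad():
    logits = model(ids).logits  # shape (1, T, vocab)

# Bits for each token after the first: -log2 p(token | prefix).
log_probs = torch.log_softmax(logits[0, :-1], dim=-1)
positions = torch.arange(ids.shape[1] - 1)
token_bits = -log_probs[positions, ids[0, 1:]] / math.log(2)

total_bits = token_bits.sum().item()
print(f"{len(text.encode()) * 8} bits raw -> ~{total_bits:.0f} bits under GPT-2")
```

This is also why it sits in a different speed class: every token of the input requires a forward pass through the model (and decompression needs the same passes to reproduce the distributions), versus the byte-level matching that zstd does.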