It might. The whole reason behind gamma correction is that human brightness perception is roughly logarithmic. By putting more resolution at lower values, posits could address the same problem gamma was invented to solve, though perhaps not as well as gamma itself.
But if it mitigates error accumulation in perceptually-friendly ways, it might still have value in calculations (once there’s native support).
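For concreteness, here's a small sketch of the standard sRGB transfer function (the constants are from the sRGB spec), showing how gamma reallocates integer codes toward the dark end, which is the job a posit's tapered precision would be competing with:

```python
def srgb_encode(c):
    """sRGB transfer function: linear light in [0, 1] -> encoded value in [0, 1]."""
    if c <= 0.0031308:
        return 12.92 * c
    return 1.055 * c ** (1 / 2.4) - 0.055

# The darker half of linear light (0 to 0.5) gets roughly 74% of the
# 8-bit codes (0 to ~188); the brighter half gets the remaining quarter.
for lin in (0.001, 0.01, 0.1, 0.5, 1.0):
    enc = srgb_encode(lin)
    print(f"linear {lin:>5} -> encoded {enc:.3f} -> 8-bit code {round(enc * 255)}")
```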
I've been studying color theory (it's harder than you'd think) in my spare time, and my undergraduate thesis was on posits a few years ago. I think posits could be a good fit for compressing luminosity, but I'm not sure how well they'd work for the a/b channels in a Lab-like format.
A format close to the cone fundamentals, like XYZ, could benefit from being encoded in some kind of non-standard posit.
I'll add this to the stack of things I'll eventually look into.
1. IEEE 754 floats are already nonlinear. Precision is highest near zero. (A quick demo follows this list.)
2. Bitmap images don't use floating-point values, except in some super-niche use cases like GIS data, so "posits" are irrelevant for the use case you've posited. (ba-dum-ts)
3. Non-linear gamma is completely unnecessary for bit depths >= 16.
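Point 1 is easy to see with Python's math.ulp, which returns the value of the least significant bit of a float, i.e. the local spacing between representable doubles:

```python
import math

# The gap between adjacent doubles grows with magnitude: floats are
# denser (more precise) near zero and coarser far from it.
for x in (1e-10, 0.001, 1.0, 1000.0, 1e10):
    print(f"ulp({x:>8}) = {math.ulp(x):.3e}")
```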
Floating-point bitmaps are a lot more common than you'd think. Most consumers never see them, but their software uses floats internally. Floating-point bitmaps are standard practice in visual effects and animation; the OpenEXR format exists specifically to exchange images with floating-point bit depths up to 32 bits per channel.
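As a rough illustration using NumPy's float16 (which is the same IEEE binary16 layout as OpenEXR's "half" type): unlike a 16-bit integer channel, a 16-bit float can hold values above 1.0, which HDR renders routinely produce.

```python
import numpy as np

# Pixel values from an HDR render; anything above 1.0 is "brighter than white".
hdr_pixels = np.array([0.001, 0.5, 1.0, 8.0, 1000.0], dtype=np.float32)

as_half = hdr_pixels.astype(np.float16)            # lossy, but keeps the HDR range
as_uint16 = (np.clip(hdr_pixels, 0.0, 1.0) * 65535).astype(np.uint16)  # integers clip it
print(as_half)    # retains 8.0 and 1000.0
print(as_uint16)  # everything above 1.0 collapses to 65535
```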
That result happens for the same reason that 1 / 3 * 3 != 1 if you use three-digit decimal: 1 / 3 = 0.333, and 0.333 * 3 = 0.999, which is not 1.00.
0.1 is the same as 1 / 10, which does not have a finite representation in binary notation, just as 1 / 3 does not have a finite representation in binary or decimal notation.
This is a problem for all number systems. The real issue isn't imprecision in the underlying bit representation of IEEE-754; it's that 0.1 and 0.2 aren't valid IEEE-754 numbers in the first place, so they get rounded to their best approximations.
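Both points are easy to check in Python. Constructing a Decimal from a float is exact, so Decimal(0.1) reveals the actual double that the literal 0.1 becomes:

```python
from decimal import Decimal, getcontext

# The decimal analogy from above: with 3 significant digits,
# 1/3 rounds to 0.333, and tripling it gives 0.999, not 1.
getcontext().prec = 3
print(Decimal(1) / Decimal(3) * 3)  # 0.999

# Same thing in binary: 0.1 has no finite base-2 expansion, so the
# literal is rounded to the nearest representable double.
print(Decimal(0.1))       # 0.1000000000000000055511151231257827...
print(0.1 + 0.2 == 0.3)   # False: two rounded inputs, plus a rounded add
```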