
Not working in linearized space is a common error in pretty much every new OSS graphics project I've come across in the last decade that was done by people whose background is not in computer graphics.

I think in the old days you got CG know-how from a few books, or you went to comp.graphics.algorithms where more experienced people would gladly explain stuff to you.

Today people watch YT videos and read blog posts produced by people who also lack these basics.

It's like an error that accumulates. But no dithering will help diffuse it. ;)



> Not working in linearized space is a common error in pretty much every new OSS graphics project I've come across in the last decade that was done by people whose background is not in computer graphics.

This still happens in mature software as well, including contemporary web browsers.

Just open this image in your favorite browser and zoom out:

http://www.ericbrasseur.org/gamma-1.0-or-2.2.png


Firefox and IrfanView suck, but Blender Rules :)

https://i.imgur.com/tQPFUrx.gif


I'm not surprised. 3D software generally needs to get linear color spaces right for various shading algorithms too, like Phong.


I assure you that people made the same mistake plenty in the old days, too. Most programmers who come across an array of values assume they know what the values mean, and that stands true across time. Many paint packages and other programs still get many transforms/effects wrong due to it, and in fact some people seem to prefer some effects when not linearized.


> Most programmers who come across an array of values assume they know what the values mean

Oh yeah, let me add from the audio synthesis world that this disease is prevalent here too


In my experience it's been the exact opposite -- back in the "old days" most programmers simply didn't know about linearized space, which is why even Adobe Photoshop does plenty of incorrect calculations that way. And because there wasn't any internet, there was nobody to tell you otherwise.

These days you can at least find references to it when you look things up.


I can confirm: when I started computer graphics, I had absolutely no idea about linear space. I never had a formal education in it, though -- mostly random tutorials, demoscene stuff, things like that.

I think one of the reasons is that in the "old days", in many cases, performance mattered more than correctness. Models were, overall, very wrong, but they were fast and gave recognizable results, which was more than enough. And working in gamma space as if it were linear saved time and wasn't that bad. The fact that gamma space roughly matched the response curve of CRT monitors was an added bonus (one less operation to do).

But things have changed: with modern, ridiculously powerful GPUs, people are no longer content with just recognizable shapes; we want some degree of realism and physical correctness. Messing up color spaces in the age of HDR is not acceptable, especially since gamma correction is now considered a trivial operation.
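To put a number on how trivial it is: the sRGB transfer curve is just a small piecewise function. A minimal sketch (Python/NumPy is only my choice of illustration here; any array library works the same way):

    import numpy as np

    def srgb_to_linear(c):
        # c: sRGB-encoded values in [0, 1]
        return np.where(c <= 0.04045, c / 12.92, ((c + 0.055) / 1.055) ** 2.4)

    def linear_to_srgb(c):
        # c: linear-light values in [0, 1]
        return np.where(c <= 0.0031308, 12.92 * c, 1.055 * c ** (1.0 / 2.4) - 0.055)

Do the actual image math between those two calls and you're already ahead of a surprising amount of software.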


Not knowing about linear space means that people were using linear by default, right? That’s what I would assume. Early games and all the graphics I was exposed to up through college all used linear RGB, but just didn’t call it that, and of course RGB isn’t a real color space anyway. Most people didn’t know about gamma correction, and otherwise almost nobody converted into non-linear spaces or tried to differentiate RGB from something else. Color geeks at Pixar and Cornell and other places were working with non-linear colors, but I guess most people writing code to display pixels in the 70s & 80s weren’t thinking color spaces at all, they just plugged in some RGB values.


According to Wikipedia, sRGB is a standard created in 1996, so yeah, it just wasn't used earlier. However, at the end of the millennium you could create software that opens an image file saved in sRGB and unknowingly applies some algorithm, like dithering, without converting it to linear space first.


Gamma correction and other perceptually uniform-ish color spaces existed before 1996 and before sRGB. I was taught about the CIE set of color spaces xyY/XYZ/LAB/LUV in school and used them to write renderers before I'd ever heard of sRGB. And yes, exactly right: before they know better, a lot of people will grab something non-linear and start doing linear math on it accidentally. It's still one of the most common color mistakes to this day, I think, but it was definitely more common before sRGB. People sometimes forget that basic alpha-blend compositing needs linearized colors, so it's a common cause of fringe artifacts. Things have gotten much better, though, and quickly. A lot of game studios 20 years ago didn't have much in the way of color management, and it's ubiquitous now.
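To make the compositing point concrete, here is a toy single-channel example (using a plain 2.2 gamma as a stand-in for the full piecewise sRGB curve, so it's an approximation):

    # blend 50% white over black, naively in sRGB vs. through linear light
    fg, bg, alpha = 1.0, 0.0, 0.5
    naive   = alpha * fg + (1 - alpha) * bg                               # 0.5: too dark
    correct = (alpha * fg ** 2.2 + (1 - alpha) * bg ** 2.2) ** (1 / 2.2)  # ~0.73

The naive result is visibly darker than it should be, which is exactly where the dark fringes on antialiased edges come from.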


That is software targeting Mac and Windows. Adobe has been notoriously inept at getting color right, except for print.

Already in the old days there was Digital Fusion (now integrated as 'Fusion' into DaVinci Resolve; I think it was used on e.g. "Independence Day") and Wavefront Composer (SGI/Irix, later ported to Windows NT, but I may misremember).

Also depends on where "the old days" start. I got into CG around 1994, and back then "the bible" was "Computer Graphics: Principles and Practice" from Foley et al.

And the aforementioned newsgroup, and also comp.graphics.rendering(.renderman).

Software that was written in VFX facilities and then became OSS didn't suffer from this, as most per-channel color computations happened in f32/float, not u8/char, and colors were expected to be input linearly.

Often the DCC apps didn't do the de-gamma though. So there was an issue at the user interface layer.

But in the early 2000s the problem was understood by most people working professionally in CGI for the big screen, and all studios I worked at had proper color pipelines, some more sophisticated than others.

As far as OSS 3D renderers go, there were Aqsis and Pixie.

Krita was linear from the beginning, AFAIR. I.e. I recall using it for look development/texture paint on "Hellboy II" -- that was 2007 though.


Since there are plenty of knowledgeable folks here (you included) I'll pitch a naive question in hopes of learning some more:

Beyond efficiency, is there any reason to avoid bringing everything into "some wide gamut linear space using doubles to represent each channel" for the computations and then converting back to the desired color space for any final output or export? And are there other or alternative things you can do, on top of or instead of that, to meaningfully increase the final quality?


Most film production already does close to what you describe, they convert to linear in order to do editing/rendering/grading/compositing, and then convert to the desired output color space. One place to start learning about film color handling is ACES: https://en.wikipedia.org/wiki/Academy_Color_Encoding_System

Outside of stylistic choices, I think the only technical reasons to use fewer bits are space & bandwidth efficiency, or meeting the requirements of a given output device or file format.

There are reasons to avoid doubles, just because they're so big. 64 bits is unnecessary and wasteful for almost all color handling. Doubles are slow on most GPUs, where a lot of image processing has moved. 16 bits per channel is usually way more than enough for basic capture & display, especially if the output color range matches the input color range, i.e. little to no editing needed. (That ACES page says "Most ACES compliant image files are encoded in 16-bit half-floats, thus allowing ACES OpenEXR files to encode 30 stops of scene information.") Even 32-bit floats are vast overkill for most things, but they offer a much wider safety net in terms of using very small or very large ranges, and reduce the possibility of clipping error or visible degradation from quantization and rounding error when converting multiple times.
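The 30-stop figure quoted there falls straight out of the half-float exponent range, if you want to sanity-check it (NumPy here purely for illustration):

    import numpy as np
    f16 = np.finfo(np.float16)
    # smallest normal value ~6.1e-05, largest ~65504: roughly 30 doublings ("stops")
    print(np.log2(float(f16.max) / float(f16.tiny)))   # ~30.0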

Note that while a lot of cameras offer things like HDR outputs and high bit rate RAW, even the best photo cameras in the world are getting around 8 effective bits per channel of signal-to-noise ratio. (I'm getting this from the SNR measurements on dxomark.com.) 8 bits per channel also happens to be close to the limits of human perception, when handled carefully, i.e., not linear but a more perceptually uniform color space.


In doing so you may introduce banding artifacts by destroying the existing dithering in smooth areas of images.

You could re-dither the output, but the required amount of dither to eliminate banding artifacts is great enough to be obvious and often annoying.


Doubles are way overkill. Using something like 16 bit integers per channel is adequate even for HDR.


Nope! The easiest way to do this is to always load images as linearized Float32 color.
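Something along these lines; a sketch assuming an 8-bit sRGB PNG with no alpha channel, Pillow, and NumPy (the filename is hypothetical):

    import numpy as np
    from PIL import Image

    # hypothetical 8-bit sRGB image, no alpha channel
    srgb = np.asarray(Image.open("photo.png"), dtype=np.float32) / 255.0
    # decode the sRGB transfer curve into linear light before doing any math
    linear = np.where(srgb <= 0.04045, srgb / 12.92, ((srgb + 0.055) / 1.055) ** 2.4)

Do all the processing on the linear array and only re-encode to sRGB (or whatever the output wants) at the very end.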


"I think in the old days you got CG know how from a few books or you went to comp.graphics.algrithms where more experienced people would gladly explain stuff to you."

I think you also could hardly avoid being annoyed if you got it wrong, because the dynamic range of display devices was much smaller.


Eh, because of the viewer's contrast sensitivity function, scaling in linear space can give worse-looking results. I'd say there are four reasons that processing is so often done in a non-linear space:

1. Since our 'raw' formats are non-linear, processing in that space is what happens when you don't know otherwise.

2. It's much more computationally efficient to not convert into linear and back out again.

3. Given the low bit depth of our image data, going in and out of linear space and doing even minimal processing can easily produce banding artifacts in smooth areas (see the sketch after this list).

4. Due to the human CSF, scaling in e.g. sRGB can give results that preserve the apparent structure in the image better, while a linear scale can look bad by comparison. sRGB levels also more correctly represent perceived levels, so thresholds based on sRGB ratios will work more consistently across brightness levels.
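A quick toy demonstration of point 3 (using a plain 2.2 gamma as a stand-in for the sRGB curve): round-tripping 8-bit data through an 8-bit linear representation merges a lot of the dark levels.

    import numpy as np
    codes = np.arange(256) / 255.0                    # every 8-bit sRGB level
    linear8 = np.round(codes ** 2.2 * 255) / 255.0    # to linear, quantized to 8 bits
    back = np.round(linear8 ** (1 / 2.2) * 255)       # re-encoded to 8-bit sRGB
    print(len(np.unique(back)))                       # far fewer than 256 distinct levels

The levels that get merged show up as visible steps in smooth gradients, i.e. banding.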

I'm sure plenty of people have seen internet comments about linear processing, went and implemented it, found the results looked worse for reasons they didn't understand, and abandoned it (and plenty of others didn't notice it looked worse and crapped up their code without knowing it. :) )




