This is a cool technique, and I really love the article and all the visual examples! The effort put into writing this post and making it clear, and adding all the neat examples at the end, is awesome. So this is not a fair critique, maybe not even entirely relevant, but the hard part for me, having worked in CG film, is that this starts from a box filter, goes even sharper than that, and still lets you clearly see pixels. Good filtering hides the very existence of pixels from view; it's exactly blurry enough to make the little square shapes invisible. This technique is great for games relative to single sampling, but it wouldn't get past a lighting sup, and it wouldn't suffice for high quality prints.
One thing I think is underappreciated when people talk about blurriness and sharpness is that a slight amount of visible blurriness can be much better for the overall clarity of an image than erring on the side of sharpness. I learned this accidentally many years ago doing a Siggraph video on VHS tape, you know, with the old 480i 60-fields-per-second format with alternating odd/even fields. Certainly there's a little nuance there that is different than 1080p LCD pixels, but I found out that over-blurring a little vertically fixed all the interlace tearing and made the image so much more clear. A friend who'd written papers on antialiasing called me to ask how I'd done it, why it was so clear, and was as surprised as I was to hear that it was blurrier than expected. Ever since then, whenever I see someone trying to squeeze the last bit of sharpness out of their filter, it almost always seems to come at the cost of seeing pixels and damaging clarity. Experts will say that a Gaussian filter is too soft and you can get sharper, but for high quality filtering, I haven't found anything else that will reliably hide the pixels and leave you looking only at the image rather than the sampling.
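For what it's worth, here's a minimal sketch of the kind of mild vertical blur being described, assuming a grayscale frame stored as a NumPy array; the sigma value is purely illustrative, not whatever was actually used on that tape:

```python
import numpy as np

def gaussian_kernel_1d(sigma):
    # Discrete Gaussian, normalized so the weights sum to 1.
    radius = int(3 * sigma + 0.5)
    x = np.arange(-radius, radius + 1)
    k = np.exp(-0.5 * (x / sigma) ** 2)
    return k / k.sum()

def blur_vertical(frame, sigma=0.7):
    # Separable Gaussian applied along the vertical axis only, the
    # direction where interlaced fields tear against each other.
    k = gaussian_kernel_1d(sigma)
    return np.apply_along_axis(
        lambda col: np.convolve(col, k, mode="same"), 0, frame)
```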
> Warning: this page has about 72MB of GIFs! Medium also has a tendency to not load them properly, so if there's a large gap or an overly blurry image, try reloading the page. Ad blockers may cause problems; NoScript may make it better. It's also broken in the official Medium app. Sorry, Medium is just weird.
There may be a fix: vote with your choice of platform, stop publishing on Medium, and publish elsewhere?
> we ran into a curious problem. We had a sign in the game with a character’s name written across it. [...] The problem was the name was almost completely illegible when playing the game.
Engineer solution: Spend hours hacking a small amount of mip LOD biasing, and forcing anisotropic filtering on for that texture.
Designer solution: Lose the sign. Problem avoided, hours saved.
Often a game engine will have LOD biasing and filtering settings exposed to the artist at the model and/or material level. It's only engineering time if those settings are not implemented.
I would expect/hope that the designer asked both the artist and engineer if the sign could be fixed (and at what cost) before spending the time to re-design an alternative. Designers will often spend huge amounts of time doing tasks that could be greatly simplified if they only asked the relevant engineer (and vice-versa).
Yes, it would be possible! There isn't GPU hardware that does it, but it could be done in GPU or CPU software. (This is slower than GPU hardware texture filtering.)
Hard to say how much it would help the visual quality, but it's easy to calculate the costs, which are largely memory. A mip chain consumes 1/3rd more memory than the base texture. If you add a 2nd set of mips starting at 75% scale, then I think (napkin math) it'll bring your memory consumption closer to 2x the original texture. https://en.wikipedia.org/wiki/Mipmap#Mechanism
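To sanity-check that napkin math (assuming square textures, with sizes expressed relative to the base texture's memory as 1.0):

```python
# Each mip level is 1/4 the size of the level above it, so a standard
# mip chain adds 1/4 + 1/16 + 1/64 + ... ~= 1/3 on top of the base.
base = 1.0
std_mips = sum(0.25 ** i for i in range(1, 16))               # ~0.333
# A hypothetical 2nd chain whose top level sits at 75% scale covers
# 0.75^2 = 0.5625 of the base, plus its own quarter-sized reductions.
second_chain = 0.75 ** 2 * sum(0.25 ** i for i in range(16))  # ~0.75
print(base + std_mips)                 # ~1.33x: texture + standard mips
print(base + std_mips + second_chain)  # ~2.08x: with the 2nd chain added
```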
You would need to texture sample the intermediate mip levels separately. All GPUs (that I know of) expect the mips to be power-of-2 reductions.
You are also increasing texture memory usage (storage and bandwidth) by 50%.
There is probably some quality improvement, but there is a fairly large cost. Would be interesting to see what the results look like.
Indeed, it's a question of storage. If you imagine mipmaps being precalculated convolutions of the original texture map, then there are infinitely many such convolutions that may be appropriate for some 3D view of that textured surface.
On a related note, there is a technique called summed area tables, which allows for the rapid evaluation of the total value of all pixels in any axis-aligned rectangle with only a few additions and subtractions. AFAIK this never really caught on, though, because it required the storage of much larger numbers than GPU textures could historically support.
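As a rough sketch of the idea, assuming an 8-bit grayscale image in a NumPy array (note how quickly the accumulated sums outgrow 8-bit texel storage, which is the historical problem mentioned above):

```python
import numpy as np

def build_sat(img):
    # sat[y, x] holds the sum of every pixel in img[0:y+1, 0:x+1].
    # The wide dtype matters: the sums grow far beyond 8 bits.
    return img.astype(np.uint64).cumsum(axis=0).cumsum(axis=1)

def box_sum(sat, x0, y0, x1, y1):
    # Total of all pixels in the inclusive rectangle (x0, y0)..(x1, y1),
    # evaluated with at most three additions/subtractions.
    total = int(sat[y1, x1])
    if x0 > 0:
        total -= int(sat[y1, x0 - 1])
    if y0 > 0:
        total -= int(sat[y0 - 1, x1])
    if x0 > 0 and y0 > 0:
        total += int(sat[y0 - 1, x0 - 1])
    return total
```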
I've heard before that watching 4K video on a 1080p display is still better than watching the same video in 1080p - is that because there's a kind of supersampling going on there?
On YouTube, on a 1080p display, 4K video is much better than the same video at 1080p. This is because YouTube uses a different, higher quality set of codec settings and encodes the video at a higher bitrate.
In a less imperfect world, if you're watching video on a 1080p display, watching a video that is encoded at 1080p will give you the best quality for any given bitrate in any given codec.
Ideally YouTube would give three setting choices: codec (i.e., if your device supports decoding VP9 in hardware, it will do that, and fall back to successively worse codecs), resolution, and high quality vs. low bandwidth. But that would mean encoding every video twice as many times and using about twice as much total storage. I can understand why they've combined these two separate tunable settings into one resolution+quality knob.
They do separate out the codec, though; AFAIK most videos will be available in VP9, H.265, and H.264. Possibly others.
I think if you watch that 4K video with nearest-neighbor downsampling (i.e. the player drops 75% of the pixels and only renders 25% of them, one from each 2x2 quad), you'll get worse quality.
Fortunately, that's not how video players normally resize the videos they play. At the very least, players use bilinear sampling. For the exact 50% downsampling of 4K into 1080p, this means the GPU averages each 2x2 quad of the source video texture into one output pixel. That averaging step hides a substantial amount of codec artifacts. Bilinear sampling is also borderline free on GPUs: they have tons of VRAM bandwidth, and texture samplers are dedicated fixed-function pieces of hardware.
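A minimal sketch of the difference, assuming a grayscale frame in a NumPy array (real players do this on the GPU's texture samplers, not in Python):

```python
import numpy as np

def downsample_nearest(frame):
    # Keep only the top-left pixel of each 2x2 quad; the other 75%
    # of the source pixels never influence the output.
    return frame[::2, ::2]

def downsample_bilinear_2x(frame):
    # At an exact 50% scale with half-texel-centered sample points,
    # bilinear sampling reduces to averaging each 2x2 quad.
    h, w = frame.shape
    f = frame[:h - h % 2, :w - w % 2].astype(np.float32)
    return (f[0::2, 0::2] + f[0::2, 1::2]
          + f[1::2, 0::2] + f[1::2, 1::2]) / 4.0
```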
If you're using Windows and Media Player Classic, the player has a preference to switch the resizing algorithm: Options > Playback > Output, the "Resizer" combobox.
Generally not the point. Depending on your video player you might only get aliasing artifacts (and virtually always higher resource consumption).
The point of selecting a higher definition video stream can be to get around a bandwidth limitation, if the platform happens to compress what it sends by default too heavily.