These blurry images look like they are composed of a couple of gradients.
Wouldn't it be possible to generate a tiny SVG or some CSS that reproduces these gradients?
In that case, basically no decoding step is needed and integration is as easy as setting a CSS style on a div or including an SVG. If the image dimensions are set from the start, the browser can even continue laying out the page before actually drawing the gradients. Letting the browser render the gradients natively should also be much faster.
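For illustration, here's a minimal TypeScript sketch of that idea (the helper and the sample colors are made up, not part of BlurHash): a few colors sampled server-side get turned into a plain CSS gradient, so the client does no decoding at all.

    // A minimal sketch, not part of the BlurHash library: turn a few colors
    // sampled from the original image (shipped alongside the image URL) into
    // a CSS gradient the browser paints natively, with no decode step.
    function gradientPlaceholder(colors: string[]): string {
      return `linear-gradient(135deg, ${colors.join(", ")})`;
    }

    // Hypothetical usage: ".thumb" is a placeholder div already sized like the image.
    const thumb = document.querySelector<HTMLElement>(".thumb");
    if (thumb) {
      thumb.style.backgroundImage = gradientPlaceholder(["#7a6e5a", "#b0a184", "#4d5a66"]);
    }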
Someone can correct me, but gradients and large box shadows are some of the most intensive things your browser can render. There's noticeable slowdown when you do too much of that.
There certainly is an SVG thing but I don't remember the name.
I think Blurhash is the way it is because Wolt is an app and not a website, so the decoder is already loaded when you open it. For browser use, SVG would be more straightforward.
True, but we have GZip for that, and you don't need a lib to parse it. Actually, it would be nice to compare the overall size needed for both Blurhash lib + a bunch of hashes vs a bunch of SVG placeholders glued together.
I wish something like this was used for the Signal message history, similar to what WhatsApp does. As it is now, you either have an ever growing message database full of pictures you don’t need to keep around, or you start deleting them and end up with weird gaps in your message history. Keeping a few bytes of data as a placeholder for deleted images would be optimal in my opinion.
Not sure why I would want to use this instead of progressive image decoding with a (CSS?) blur filter. Instead of a blurred image suddenly flashing to a fully loaded image, the user would see the image quality gradually improve as it loads over the network.
Then again, progressive JPEG or FLIF is not supported that well anywhere, so I guess this could be the best thing right now because it seems to just require HTML5 canvas.
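A rough sketch of that approach, assuming the image is served as a progressive format and the class name is your own: blur the <img> with CSS while it streams in, and drop the blur once the full image has loaded.

    // Sketch: show the streaming image behind a CSS blur, then remove the
    // blur when the full image has finished loading. The class name and
    // blur radius are arbitrary choices, not from any library.
    const img = document.querySelector<HTMLImageElement>("img.progressive");
    if (img) {
      img.style.filter = "blur(12px)";
      img.style.transition = "filter 0.3s ease-out";
      img.addEventListener("load", () => {
        img.style.filter = "none";
      });
    }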
A blurhash is much easier to send along with the JSON data used to display the initial page. To see a blurred version of a progressive JPEG, the browser must have first started to download that JPEG. When there are many JPEGs to download, it may not even have started downloading yet.
I typically store it in the DB along with the image ID (or URL). It's displayed even before the network request for the actual image has started. Works like a charm on slow networks.
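As a sketch of that flow (the record shape is made up; decode() is assumed to be the function exported by the blurhash npm package, taking a hash plus width and height and returning RGBA bytes):

    import { decode } from "blurhash";

    interface PhotoRecord {
      id: string;
      url: string;
      blurhash: string; // stored in the DB next to the image URL
    }

    // Paint the placeholder first, then start fetching the real image.
    function showWithPlaceholder(
      photo: PhotoRecord,
      canvas: HTMLCanvasElement,
      img: HTMLImageElement
    ): void {
      const pixels = decode(photo.blurhash, canvas.width, canvas.height);
      const ctx = canvas.getContext("2d");
      if (ctx) {
        const imageData = ctx.createImageData(canvas.width, canvas.height);
        imageData.data.set(pixels);
        ctx.putImageData(imageData, 0, 0);
      }
      img.src = photo.url; // the request for the full image starts only now
    }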
Mastodon, for example, uses it to defer loading of images tagged with a Content Warning until the user clicks reveal. It provides useful context for the user to decide whether to view the image, without requiring browsers/clients to load the image beforehand.
I am using blurhash on my photography website and it works and looks great. But given all the examples and libraries out there, more DIY work than expected was required to implement the main use case for blurhash end-to-end in vanilla JS.
What are the memory and CPU costs of using this in web pages? For example, when a page with 100 images loads, won't decompressing hashes into 100 canvases take a lot of resources and block the page or entire system for several seconds?
The placeholders are blurred, and I assume that creating blurred images is pretty expensive and takes O(N^2*M) time, where N is the size of the image and M is the number of points.
Wouldn't it be cheaper to use blocky placeholders that take only O(N^2) time to paint?
> What are the memory and CPU costs of using this in web pages? For example, when a page with 100 images loads, won't decompressing hashes into 100 canvases take a lot of resources and block the page or entire system for several seconds?
If you're sensible and use small input images, and therefore small hashes, it should be fine. It's not actually doing all that much 'work'. Besides, browsers are much faster than people think they are, especially for things like canvas drawing because that happens on the GPU (even in a 2d context.)
> I assume that creating blurred images is pretty expensive.
The input image is converted to a hash. When the placeholder is needed, the hash is converted to a gradient. Essentially it's like picking a few points in an image and then using a gradient function to fill in the spaces between the points. That's something that's easy for a computer to do quickly.
Also, as it happens, blurring an image is fast too. You can implement a Gaussian blur in a convolution filter, and that's just a simple matrix.
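To make the "few points plus a gradient function" idea concrete, here's a simplified sketch (grayscale only, and not the actual BlurHash decoder): a handful of cosine components is enough to fill every pixel with a smooth value, and the work is just width x height x number of components.

    // Simplified illustration, not the real BlurHash decoder: fill a
    // grayscale buffer from a few cosine components.
    // Cost is O(width * height * number of components).
    function fillFromComponents(
      coeffs: number[][], // coeffs[j][i] = weight of the (i, j) cosine component
      width: number,
      height: number
    ): Float32Array {
      const out = new Float32Array(width * height);
      for (let y = 0; y < height; y++) {
        for (let x = 0; x < width; x++) {
          let value = 0;
          for (let j = 0; j < coeffs.length; j++) {
            for (let i = 0; i < coeffs[j].length; i++) {
              value +=
                coeffs[j][i] *
                Math.cos((Math.PI * i * x) / width) *
                Math.cos((Math.PI * j * y) / height);
            }
          }
          out[y * width + x] = value;
        }
      }
      return out;
    }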
> browsers are much faster than people think they are
Serious question: my 2017 MacBook Air grinds to a halt on many web pages, especially ones with video animation, to the point where I can't type because it drops multiple characters. I have to use an ad blocker to make pages workable. Is this normal or is there something wrong with my machine?
Occasionally I will accidentally browse without an ad blocker, and yes, the amount of CPU required by ads can be enormous.
People seem to just be used to them and claim that their computers are ‘getting slower’, but really they don’t use their computer for anything but web browsing and it’s only slower because ads are getting worse and worse.
Without wishing to create a paradox, it absolutely does, yes. I've been a web dev since 1997 and I frequently have to remind myself that I don't always need to optimize things, memoize things, or throw features out based on browser performance any more. I constantly underestimate what browsers are capable of. There's a 'pandemic' of over-optimization that heaps complexity (aka bloat) into web apps unnecessarily based on the mostly wrong belief that browsers are slow.
Devs need to be careful, and they need to measure things. They shouldn't start with the assumption that something will be slow.
What's really interesting about this whole question is that HN's least favorite frontend library, React, suffers from this problem. The virtual DOM implementation was necessary a decade ago when React started, but DOM manipulation has since been optimized in browsers, so now the vdom is actually a bit of a hindrance (React has advantages other than speed, so it's still a fine choice). Libraries that HN likes, such as Svelte and Solid, rely on the browser to be fast, because the browser is fast.
I don't know which world you live in, but most websites are incredibly unoptimised and over-engineered. They use a huge amount of resources on expensive machines and grind consumer machines to a halt.
I wish we lived in a pandemic of over-optimization.
It's entirely possible for both things to be true at the same time. Tons of developers prematurely optimize things that don't matter because they make assumptions about performance. Those same assumptions can also mean an epidemic of not optimizing things that *do matter*.
> Devs need to be careful, and they need to measure things. They shouldn't start with the assumption that something will be slow.
I disagree. The specs of developers' machines typically so far surpass those of users as to make that assumption valid more often than not.
Further, the fail case for incorrectly assuming that a web page will be fast ranges from it being slow to it being unusable, while the fail case for incorrectly assuming that a web page will be slow is that it is slightly more responsive than expected.
> The specs of developers' machines typically so far surpass those of users as to make that assumption valid more often than not.
Billions of people are using computers that are 100 times slower than your machine by various metrics: 100 times less hard drive space, a 100 times slower hard drive, a 100 times slower processor, 8-16 times less memory, etc.
But then they probably won't pay you anyway, and if you target them, you have to make compromises that affect the overall quality.
The blurring in this case is just a natural consequence of throwing away all the high-frequency DCT components. And since DCT is used to create JPEGs there are highly optimized versions available.
I really dislike these and similar super blurry image replacements. They look ugly to me. Just use one color until the image loads, that's way simpler and doesn't look worse than these.
It seems more like a modified version of jpg compression which also uses DCT.
Why not just downsample the original image to a 10x10 jpg and encode it as base64 so it can be used directly in an <img>, without blocking your main JS thread to decode the image? Apply a CSS blur if you like the soft gradient effect.
Both formats are trying to encode the parameters of cosine functions, except jpg is implemented natively in web browsers, likely with SIMD instructions. Is the ~200 bytes in savings per image really worth the extra computation cost you're passing on to your users?
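A sketch of that suggestion (the base64 string and sizes are placeholders; generating the 10x10 thumbnail happens server-side and isn't shown):

    // The server ships a tiny base64 JPEG; the client drops it straight into
    // an <img> and lets CSS blur it. No JS-side decoding at all.
    const tinyJpegBase64 = "..."; // placeholder for the real base64 payload
    const img = document.createElement("img");
    img.src = `data:image/jpeg;base64,${tinyJpegBase64}`;
    img.style.width = "320px";        // display size of the eventual full image
    img.style.height = "240px";
    img.style.filter = "blur(16px)";  // the soft gradient effect mentioned above
    document.body.appendChild(img);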
It's unlikely, since the smallest possible 10x10 jpg is just under 300 bytes. You might be able to squeeze out a few dozen bytes if you tried multiple browser-supported formats (e.g. jpg, webp, png, gif) and picked the smallest one, since they might be better at encoding blurhash's gradient effect.
I'm sorry, I wasn't clear about what I meant. The idea was to convert the hash to JPEG once it reached the client, so that the browser's native JPEG support could be used to display and cache the image.
In this case it looks like a key consideration is the way each data point fits into exactly 2 characters, with no tricky bit manipulation going on. I'd say it's pretty clever.
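For reference, the two-characters-per-value scheme is just base-83 positional arithmetic. A sketch (the alphabet itself is defined by the BlurHash spec and is passed in here rather than reproduced):

    // Unpack one two-character value. The base-83 alphabet comes from the
    // BlurHash spec; it's a parameter here rather than being reproduced.
    function decode83Pair(pair: string, alphabet: string): number {
      const hi = alphabet.indexOf(pair[0]);
      const lo = alphabet.indexOf(pair[1]);
      if (hi < 0 || lo < 0) throw new Error("character not in alphabet");
      return hi * 83 + lo; // 0 .. 83*83 - 1, i.e. just under 13 bits per pair
    }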
Do websites still use progressive JPEGs? The blurhash is much smaller than the data needed to render a progressive JPEG's first pass, but would the progressive rendering on top of the blurhash still be worthwhile on today's fast networks?
> would the progressive rendering on top of the blurhash still be worthwhile on today's fast networks
I'm experiencing bad connections all the time, especially while travelling. Additionally, mobile plans are quite expensive in Germany compared to many other countries. So yes, I try to save bandwidth when possible, e.g. by offering AVIF and WebP in addition to PNG and JPEG files on my blog.
I've seen something similar before, where a specific JPEG header and specific quantization matrices were added to user-provided binary data to make very low-res images. That method uses the native JPEG decoder, so it's fast.
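A sketch of how that kind of trick is usually wired up (this is the general idea as described above, not any particular library's code; the actual header and footer bytes depend on the dimensions and quantization tables you pick, so they're left as declared constants):

    // Splice per-image payload bytes between a fixed, precomputed JPEG header
    // (which contains the tiny dimensions and the quantization tables) and a
    // fixed footer. The browser's native JPEG decoder does the rest.
    declare const FIXED_JPEG_HEADER: Uint8Array; // prepared once, offline
    declare const JPEG_FOOTER: Uint8Array;       // end-of-image marker etc.

    function tinyJpegFromPayload(payload: Uint8Array): Blob {
      const bytes = new Uint8Array(
        FIXED_JPEG_HEADER.length + payload.length + JPEG_FOOTER.length
      );
      bytes.set(FIXED_JPEG_HEADER, 0);
      bytes.set(payload, FIXED_JPEG_HEADER.length);
      bytes.set(JPEG_FOOTER, FIXED_JPEG_HEADER.length + payload.length);
      return new Blob([bytes], { type: "image/jpeg" });
    }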
> Wouldn't it be possible to generate a tiny SVG or some CSS that reproduces these gradients?
Is there a library that does that?