Indeed there is currently no incremental update in Headway, and deployments are largely an exercise left to the reader.
For maps.earth (a Headway planet deployment), I typically rebuild the world, and then do a blue/green deployment.
I guess the one exception is for transit routing. We have individual transit zones small enough to fit into memory, which can be deployed incrementally. There’s nothing really built in about it - just another level of indirection via our “travelmux” service which redirects your routing queries to a different backend depending on mode and region.
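If it helps to picture that indirection, it's roughly this shape (a toy sketch, not travelmux's actual code; the zone names, bounding boxes, and backend URLs below are made up):

```python
from dataclasses import dataclass

@dataclass
class Zone:
    name: str
    bbox: tuple  # (min_lon, min_lat, max_lon, max_lat)
    otp_url: str  # per-zone OpenTripPlanner instance

# Hypothetical zones; each one backs a single OTP graph small enough to fit in RAM.
ZONES = [
    Zone("puget-sound", (-123.2, 46.8, -121.0, 48.5), "http://otp-puget:8080"),
    Zone("la-metro", (-119.0, 33.3, -117.0, 34.8), "http://otp-la:8080"),
]

VALHALLA_URL = "http://valhalla:8002"  # planet-wide graph for non-transit modes

def pick_backend(mode: str, lon: float, lat: float) -> str:
    """Dispatch a routing query: transit goes to the zone containing the
    origin, everything else goes to the planet-wide Valhalla instance."""
    if mode != "transit":
        return VALHALLA_URL
    for z in ZONES:
        min_lon, min_lat, max_lon, max_lat = z.bbox
        if min_lon <= lon <= max_lon and min_lat <= lat <= max_lat:
            return z.otp_url
    raise LookupError("no transit zone covers this origin")
```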
I am trying to learn from real deployments as I design Corviont's updater for edge boxes (bandwidth caps, maintenance windows, unreliable WAN, atomic swap + rollback).
When you say transit "zones" are small enough to deploy incrementally - what is the actual artifact per zone (roughly what format), and what sizes do you typically see?
And when a transit zone dataset changes, how do you roll that out safely - do you restart/reload the backend that serves that zone, or do you bring up a new backend/version and then flip travelmux to point at it?
Transit routing is provided by OpenTripPlanner, so the deployment artifact is their OTP serialized graph format.
So it’s not really incremental with respect to the existing transit zone deployment. I just mean I can redeploy a single transit zone with the latest GTFS without having to touch the other transit zones, tileserver, geocoder, etc.
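Since you asked about atomic swap + rollback: this isn't how maps.earth does it specifically, but the general shape of a safe per-zone swap is "build the new graph next to the old one, then flip a pointer". A minimal sketch with made-up paths, using a symlink as the stand-in for whatever blue/green mechanism you use:

```python
import os

def swap_zone_graph(zone_dir: str, new_graph_dir: str):
    """Atomically repoint `current` at a freshly built per-zone graph.

    Hypothetical layout:
      zone_dir/graph-old/        <- previous build
      zone_dir/graph-new/        <- new_graph_dir, built from the latest GTFS
      zone_dir/current -> graph-old

    Returns the previous target so the caller can roll back by calling
    this again with the old directory.
    """
    current = os.path.join(zone_dir, "current")
    previous = os.readlink(current) if os.path.islink(current) else None

    # Create the new symlink beside the old one, then rename over it.
    # rename() is atomic on POSIX, so readers never observe a missing link.
    tmp = current + ".new"
    if os.path.lexists(tmp):
        os.remove(tmp)
    os.symlink(new_graph_dir, tmp)
    os.replace(tmp, current)
    return previous
```

Since OTP loads its graph at startup, in practice the "flip" usually means bringing up a fresh OTP instance on the new graph and pointing the dispatcher at it; the symlink is just the simplest stand-in for that switch.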
One tricky thing about maps, as they relate to privacy, is that the earth is large.
Compare that to encrypted email: if I’m sending you an encrypted message, the total data involved is minimal. To a first approximation, it’s just the message contents.
But if I want “Google Maps but private,” I first need access to an entire globe’s worth of data, on the order of terabytes. That’s a lot of storage for your (usually mobile) client, and a lot of bandwidth for whoever delivers it. And that data needs to be refreshed over time.
Typical mapping applications (like Google Maps) solve this with a combination of network services that answer your questions remotely (“Tell me what you’re searching for, and I’ll tell you where it is.”) and by tiling data so your client can request exactly what it needs, and no more, which is also a tell.
The privacy-focused options I see are:
1. Pre-download all the map data, like OrganicMaps [1], to perform your calculations on the device. From a privacy perspective, you reveal only a coarse-grained notion of the area you’re interested in. As a "bonus", you get an offline maps app. You need to know a priori what areas you’ll need. For directions, that's usually fine, because I’m usually looking at local places, but sometimes I want to explore a random spot around the globe. Real-time transit and traffic-adaptive routing remain unaddressed.
2. Self-host your own mapping stack, as with Headway (I work on Headway). For the reasons above, it's harder than hosting your own wiki, but I think it's doable. It doesn't currently support storing personal data (where you've been, favorite places, etc.), but adding that in a privacy-conscious way isn't unfathomable.
The entire planet's worth of reverse geocoding data is ~120 GB. The map tiles file for the whole planet is also ~120 GB, and both are precompiled, so you don't need hundreds of GB of RAM to run your local planet. It's easier than you probably think nowadays. Not mobile-sized, but local-server-sized.
I'd love it if it were easier than I think, because I spend a lot of time thinking about it! I host maps.earth, which is a planet-sized deployment of the Headway mapping stack (which I also maintain).
To first order, you're right about the storage size of a vector tileset and a geocoding dataset based on OpenStreetMap. But Google Maps is a lot more than that!
Headway uses Valhalla for most routing. A planet-wide Valhalla graph is roughly 100 GB of storage. It doesn't produce reasonable transit directions, though. Transit is an even tougher cookie.
OpenTripPlanner gives good transit routing, but it doesn't scale to planet-wide coverage. We've settled on a cluster of OTP nodes for select metro areas - each one on the order of 5-10 GB of RAM.
So, I'd say we have some of the pieces of a general-purpose mapping tool that could replace Google Maps usage, and which you could host yourself.
But we don't have satellite imagery, real-time traffic data, global transit coverage, or rich POI data (like accurate opening hours, photographs, and reviews).
Do all people want all these features? Probably not, but a lot of people seem to want at least some of them, and it's not obvious to me that these gaps will be closed quickly.
The majority of non-Chinese displays are fabricated by basically 3 companies:
1. Samsung Displays - which primarily invests in SK and ASEAN
2. LG Displays - which primarily invests in SK and ASEAN
3. Japan Display Inc - a JV of Sony/Hitachi/Toshiba, which primarily invests in Japan and ASEAN
The China related data does make sense given BOE and HKC's execution, but the Taiwanese data feels like an extrapolation of AUO and Innolux's market share. If they are including Innolux's production (which includes Pioneer in Japan), then it might be overindexing Taiwanese production.
"developer" is a slightly weird way of putting it. osm.org is the contributor portal and demo site for OpenStreetMap, and yes it is not and never has been intended as an end user replacement for Google Maps and similar offerings.
The main purpose of the maps on the site is, besides showcasing some topical uses of OSM data, rapid feedback to contributors - something that required specific development so that it could be provided with vector tiles too.
The 'late to the game' narrative seems to be a bit misplaced in any case given that OSM data has powered essentially all vector tile use outside of Google over more than a decade.
Well that's a bit of a balancing act, isn't it? "Why isn't this feature on the site, it would be more useful to me; also, that feature is annoying and useless to me" - says literally everyone.
Thus, we get a built-in editor (less friction, yay/too rudimentary, boo) and popups (OSM events, yay/Popups are now everywhere, boo). And endless discussions on "this should be more/less prominent."
I don't think there's any official edict, but the approach "I don't like this and I'll make my own version" is at the very least workable (as opposed to, say, Google Maps).
Mapping applications split up data into "tiles" so you can download only the data you are currently looking at. For example, you don't want to download the entire planet, just to look at your own neighborhood.
Historically, these tiles were literally images that the client application (e.g. a web map) could "tile" side by side to cover the part of the map you were looking at. Now we refer to those images as "raster" tiles, to differentiate them from "vector" tiles.
Rather than a rendered image, vector tiles contain the raw data you could use to render such an image. Vector tiles allow a smaller file size and more flexibility. For example, with vector tiles you can crisply render at partial zoom levels, keeping lines sharp and avoiding pixelated text. The client can also apply customizable styles - hiding certain layers or accentuating others.
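To make the tiling concrete: the standard "slippy map" scheme turns a longitude/latitude and zoom level into integer tile coordinates, so a client only ever asks for the handful of /z/x/y tiles covering its viewport (the URL template below is just illustrative):

```python
import math

def lonlat_to_tile(lon: float, lat: float, zoom: int):
    """Standard Web Mercator 'slippy map' tile numbering."""
    n = 2 ** zoom
    x = int((lon + 180.0) / 360.0 * n)
    lat_rad = math.radians(lat)
    y = int((1.0 - math.asinh(math.tan(lat_rad)) / math.pi) / 2.0 * n)
    return x, y

# Example: the tile containing central Seattle at zoom 14.
x, y = lonlat_to_tile(-122.33, 47.61, 14)
print(f"https://tiles.example.com/14/{x}/{y}.mvt")  # hypothetical tile server
```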
Vector tiles are not new technology. For example, Google Maps started using them over a decade ago. So why has it taken so long for OpenStreetMap.org? One reason is no doubt a lack of engineering capacity. There were also concerns about older and less powerful client hardware not being up to the task, but that concern has lessened over time.
OpenStreetMap also has some unique requirements. It is a community-edited database, and users want to see their edits soon (immediately, really). It's not feasible to dynamically generate every requested tile from the latest data, so caching is essential. Still, to minimize how long tiles stay stale, a lot of work went into being able to quickly regenerate stale tiles before the new vector tiles were rolled out.
Yeah, it really is not. The Mapbox Vector Tiles spec came out in 2014, and they've been absolutely standard across all (non-government) web mapping for at least the last 5 years.
Could you explain why you say that? My understanding is that rasterized tiles are only a graceful degradation when necessary in MapKit, while vector is default, as the behavior and experience would indicate.
I was referring to using custom vector tiles like the ones in the post with MapKit, not Apple's tiles. MKTileOverlay is the only way to do it, and it only supports bitmap tiles.
Tiles aren't just about data selection, they're also about caching. By turning a continuous domain (any part of the world at any scale) into a series of discrete requests (a grid of tiles at several fixed scales), maps become a series of cacheable requests.
The short version is: Traditionally, Bob needed to “log in” to be able to send a message to Alice’s inbox.
With Sealed Sender, Alice gives Bob a credential that allows him to message her from now on without logging in.
Only Alice can tell that the message she received is from Bob.
There’s some subtlety around bootstrapping these credentials and preventing abuse which means that not every message can be sent as Sealed Sender, but the vast majority are. Read the blog post for the authoritative explanation.
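If it helps, here's a toy sketch of the shape of the idea - emphatically not Signal's actual protocol or wire format, just the relationship between the delivery credential, the server, and the sealed envelope:

```python
# Toy model of the Sealed Sender idea; real Signal uses sender certificates
# and proper end-to-end encryption, none of which is modeled here.

inboxes = {}           # recipient_id -> list of sealed envelopes (the "server")
delivery_tokens = {}   # recipient_id -> credential the recipient handed out

def server_deliver(recipient_id: str, token: str, sealed_envelope: bytes):
    # The server checks only that the sender holds a valid delivery credential
    # for this inbox. It never learns who the sender is.
    if delivery_tokens.get(recipient_id) != token:
        raise PermissionError("invalid delivery credential")
    inboxes.setdefault(recipient_id, []).append(sealed_envelope)

def bob_sends(recipient_id: str, token: str, plaintext: str):
    # The sender's identity lives *inside* the envelope, encrypted to Alice's
    # key (encryption elided here); the transport only sees the recipient.
    envelope = f"from=bob;msg={plaintext}".encode()  # pretend this is encrypted
    server_deliver(recipient_id, token, envelope)
```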
If you're curious, there's an option in the app settings to show which of your messages were sent without identifying your client to the server.
Ah thanks, okay, I'm not sure I'm missing anything in that case.
But if so, doesn't Signal still know that Alice and Bob are communicating, because it's transferring messages between them? Even if Bob doesn't log in, IP B is still sending payloads that eventually get delivered to IP A, and if law enforcement later asks Signal for logs, they could be correlated.
Indeed, at some point in time a byte has to move from point A to point B, and unless you VPN through a random different location, the source and destination IPs can be identified.
Even if they can't read it, a hostile government won't care.
There is only so much you can do against a really determined adversary that's well funded.
I just want a Signal that doesn't tie everything back to a phone number.
I arrange to tell Alice in an encrypted chat that I will be doing a drop at X URL after Y time and to watch it.
Alice comes and picks up the drop. Done.
PS: This is another great use for cryptocurrency. When you don't want account-based charging, you can allow anyone to prepay for the resources with crypto.
Yes, dictionaries should be totally possible. However, I've never tried them to be honest because I usually only compress big files. They can be set on the (de)compression contexts the same way as with regular zstd.
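For anyone curious what that looks like in practice, here's roughly the shape of it with the Python zstandard bindings (other zstd wrappers expose the same knobs under slightly different names, and the sample data below is made up):

```python
import zstandard

# A "dictionary" can just be sample content your messages tend to share;
# zstandard.train_dictionary() can also build one from many small samples.
shared_prefix = b'{"type": "event", "source": "sensor", "payload": '
dictionary = zstandard.ZstdCompressionDict(shared_prefix)

# Attach the dictionary to the (de)compression contexts, as with regular zstd.
cctx = zstandard.ZstdCompressor(dict_data=dictionary)
dctx = zstandard.ZstdDecompressor(dict_data=dictionary)

message = b'{"type": "event", "source": "sensor", "payload": {"t": 21.5}}'
compressed = cctx.compress(message)
assert dctx.decompress(compressed) == message
```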