I think if you hit full path coverage in each of them independently, run all the cases through both, and check they're consistent, you're still done.
Or branch coverage for the lesser version; the idea is still to generate interesting cases based on each implementation, not based solely on one of them.
If the buggy implementation relies indirectly on the assumption that 2^n - 1 is composite, by performing a calculation that's only valid for composite values on a prime value, there won't be a separate path for the failing case. If the Mersenne numbers don't affect flow control in a special way in either implementation, there's no reason for the path coverage heuristic to produce a case that distinguishes the implementations.
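To make that concrete, here's a toy sketch in Python (nothing here is from either implementation under discussion; the functions are made up purely for illustration): generate candidate inputs from coverage of each implementation, run every candidate through both, and require agreement. The caveat above still applies: if the bug only fires on values where 2^n - 1 happens to be prime, and neither implementation branches on that property, nothing obliges the coverage heuristic to produce such an input.

    # Toy differential harness, illustrative only. The two implementations
    # compute the same function via different control flow, so coverage-guided
    # generation against either one can produce "interesting" inputs without
    # ever hitting a value that exposes a purely data-dependent bug.

    def sum_to_n_closed_form(n: int) -> int:
        # Reference: closed form, a single straight-line path.
        return n * (n + 1) // 2

    def sum_to_n_loop(n: int) -> int:
        # Alternative: a loop, so its paths depend on n but never on primality.
        total = 0
        for i in range(1, n + 1):
            total += i
        return total

    def check_consistent(candidates):
        # Run every candidate through both implementations and compare.
        for n in candidates:
            a, b = sum_to_n_closed_form(n), sum_to_n_loop(n)
            assert a == b, f"disagreement at n={n}: {a} != {b}"

    # Stand-in for whatever each implementation's coverage-guided generator produces.
    check_consistent([0, 1, 2, 3, 7, 31, 127, 8191])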
There's an Ubuntu box here to run things under cuda because two days trying to get cuda to run properly on Debian turned out to be the limit of my patience. For something that should be familiar it's intensely irritating as a dev system.
As an especial what-the-fuck-are-you-doing: for the LTS 24.04 release that Nvidia tested against, Canonical decided to upgrade their kernel, without bumping the minor revision number, to one that cuda doesn't run on. Downgrading that to the kernel 24.04 originally shipped with broke zfs, which Ubuntu made a huge fuss about shipping out of the box.
Damned thing is running now (without zfs, and gnome won't start), and I think I've killed the automated updates system, but it definitely doesn't have robust won't-fall-over vibes.
So, Canonical, if you see this: don't change the kernel you've released with if you aren't also changing the version number.
I think this works. A subset of S3's API does look like a CRDT. Metadata can go in sqlite. Compiles to a static binary easily.
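Rough shape of what I mean by the CRDT part, as a toy last-writer-wins map in Python; this is a sketch of the general idea only, not Garage's actual data model:

    # Toy last-writer-wins (LWW) map: put/delete tag each key with a
    # (timestamp, node_id) pair, get returns the winning value, and merge()
    # keeps the entry with the highest tag per key. Merge is commutative,
    # associative and idempotent, which is what lets replicas converge
    # without coordination. Illustrative only, not Garage's data model.

    from dataclasses import dataclass, field

    @dataclass
    class LWWMap:
        node_id: str
        clock: int = 0
        # key -> (timestamp, node_id, value-or-None); None is a delete tombstone
        entries: dict = field(default_factory=dict)

        def _tick(self):
            self.clock += 1
            return (self.clock, self.node_id)

        def put(self, key, value):
            self.entries[key] = (*self._tick(), value)

        def delete(self, key):
            self.entries[key] = (*self._tick(), None)  # tombstone, like an S3 DELETE

        def get(self, key):
            entry = self.entries.get(key)
            return None if entry is None else entry[2]

        def merge(self, other):
            # Per key, keep whichever entry has the larger (timestamp, node_id) tag.
            for key, entry in other.entries.items():
                if key not in self.entries or entry[:2] > self.entries[key][:2]:
                    self.entries[key] = entry
            self.clock = max(self.clock, other.clock)

    # Two replicas accept writes independently, then converge on merge.
    a, b = LWWMap("a"), LWWMap("b")
    a.put("obj", "v1")
    b.put("obj", "v2")
    a.merge(b); b.merge(a)
    assert a.get("obj") == b.get("obj")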
I've spent a mostly pleasant day seeing whether I can reasonably use garage + rclone as a replacement for NFS, and the answer appears to be yes. Not really a recommended thing to do. Garage setup was trivial, somewhat reminiscent of wireguard. Rclone setup was a nuisance; it accumulated a lot of arguments to get latency down, and I think the 1.6 in trixie is buggy.
Each node has rclone's fuse mount layer on it with garage as the backing store. Writes are slow and a bit async; debugging shows that to be wholly my fault for putting rclone in front of it. Reads are fast, whether pretending to be a filesystem or not.
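For the "not pretending to be a filesystem" read path, this is roughly what talking to Garage directly over its S3 API looks like from Python with boto3. Everything specific below is a placeholder or an assumption on my part: the endpoint and region come from the quick-start defaults as I remember them, and the bucket and access key are assumed to already exist (provisioned through the garage CLI).

    # Read an object straight from Garage over S3, skipping the fuse layer.
    # Endpoint, region, credentials and bucket/key names are all placeholders.
    import boto3

    s3 = boto3.client(
        "s3",
        endpoint_url="http://localhost:3900",  # Garage's S3 API, default port in the quick-start config
        region_name="garage",                  # matches s3_region in garage.toml
        aws_access_key_id="GK_PLACEHOLDER",
        aws_secret_access_key="SECRET_PLACEHOLDER",
    )

    obj = s3.get_object(Bucket="shared", Key="some/file")
    print(len(obj["Body"].read()), "bytes read")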
Yep, I think I'm sold. There will be better use cases for this than replacing NFS. Thanks for sharing :)
Losing a node is a regular occurrence, and a scenario for which Garage has been designed.
The assumption Garage makes, which is well-documented, is that of the 3 replica nodes, at most 1 will be in a crash-like situation at any time. With 1 crashed node, the cluster is still fully functional. With 2 crashed nodes, the cluster is unavailable until at least one additional node is recovered, but no data is lost.
In other words, Garage makes a very precise promise to its users, which is fully respected. Database corruption upon power loss falls under the definition of a "crash state", just like a node being offline due to a lost internet connection. We recommend making metadata snapshots so that recovery of a crashed node is faster and simpler, but it's not required per se: Garage can always start over from an empty database and recover the data from the remaining copies in the cluster.
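To make the arithmetic behind that promise concrete, here's a toy sketch assuming a 2-out-of-3 read/write quorum (illustrative only, not Garage's actual code):

    # With 3 replicas and a quorum of 2 for both reads and writes, any read
    # quorum overlaps any earlier write quorum, so one crashed node never
    # loses an acknowledged write. With 2 crashed nodes there is no quorum,
    # so the cluster goes unavailable instead of serving incomplete data.
    # Toy model only, not Garage's implementation.

    REPLICAS = 3
    QUORUM = 2  # assumed read/write quorum size

    def cluster_state(crashed: int) -> str:
        live = REPLICAS - crashed
        if live >= QUORUM:
            return "fully functional"
        return "unavailable until a node recovers, but no acknowledged data is lost"

    # The promise above covers 0, 1 or 2 crashed nodes out of 3.
    for crashed in range(REPLICAS):
        print(f"{crashed} crashed node(s): {cluster_state(crashed)}")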
To talk more about concrete scenarios: if you have 3 replicas in 3 different physical locations, the assumption of at most one crashed node is pretty reasonable; it's quite unlikely that 2 of the 3 locations will be offline at the same time. As for data corruption on power loss, the probability of losing power at 3 distant sites at the exact same time, with the same data in the write buffers, is extremely low, so I'd say in practice it's not a problem.
Of course, this all implies a Garage cluster running with 3-way replication, which everyone should do.
That is a much stronger guarantee than your documentation currently claims. One site falling over and being rebuilt without loss is great. One site losing power, corrupting the local state, then propagating that corruption to the rest of the cluster would not be fine. Different behaviours.
I think this is one where the behaviour is obvious to you but not to people first running across the project. In particular, whether power loss could do any of:
- you lose whatever writes to s3 haven't finished yet, if any
- the local node will need to repair itself a bit after rebooting
- the local node is now trashed and will have to copy all data back over
- all the nodes are now trashed and it's restore from backup time
I've been kicking the tyres for a bit and I think it's the happy case in the above, but lots of software out there completely falls apart on crashes, so it's not generally a safe assumption. I think the behaviour here is that sqlite on zfs doesn't care about the power cable being pulled; lmdb is a bit further down the list.
If I make certain assumptions and you respect them, I will give you certain guarantees. If you don't respect them, I won't guarantee anything. I won't guarantee that your data will be toast either.
If you can't guarantee anything for all the nodes losing power at the same time, that's really bad.
If it's just the write buffer at risk, that's fine. But the chance of overlapping power loss across multiple sites isn't low enough to risk all the existing data.
I disagree that it's bad; it's a choice. You can't protect against everything. The team did the calculations and decided that the cost of protecting against this very-low-probability event is not worth it. If all the nodes lose power, you may have a bigger problem than that.
It's downright stupid if you build a system that loses all existing data when all nodes go down uncleanly, not even simultaneously but just overlapping. What if you just happen to input a shutdown command the wrong way?
I really hope they meant to just say the write buffer gets lost.
That's why you need to go to other regions, not remain in the same area. Putting all your eggs in one basket (single area) _is_ stupid. Having a single shutdown command for the whole cluster _is_ stupid. Still accepting writes when the system is in a degraded state _is_ stupid. Don't make it sound worse than it actually is just to prove your point.
> Still accepting writes when the system is in a degraded state _is_ stupid.
Again, I'm not concerned for new writes, I'm concerned for all existing data from the previous months and years.
And getting in this situation only takes one out of a wide outage or a bad push that takes down the cluster. Even if that's stupid, it's a common enough stupid that you should never risk your data on the certainty you won't make that mistake.
You can't protect against everything, but you should definitely protect against unclean shutdown.
If it's a common enough occurrence to have _all_ your nodes down at the same time, maybe you should reevaluate your deployment choices. The whole point of multi-node clustering is that _some_ of the nodes will always be up and running; otherwise what you're doing is useless.
Also, garage gives you the option to automatically snapshot the metadata, along with advice on how to do the snapshotting at the filesystem level and how to restore from it.
All nodes going down doesn't have to be common to make that much data loss a terrible design. It just has to be reasonably possible. And it is. Thinking your nodes will never go down together is hubris. Admitting the risk is being realistic, not something that makes the system useless.
How do filesystem level snapshots work if nodes might get corrupted by power loss? Booting from a snapshot looks exactly the same to a node as booting from a power loss event. Are you implying that it does always recover from power loss and you're defending a flaw it doesn't even have?
It sounds like that's a possibility, but why on earth would you take the time to set up a 3-node cluster of object storage for reliability and ignore one of the key tenets of what makes it reliable?
"Leaking" is an unauthorised third party getting data; for any cloud data processor, data that is sent to that provider by me (OpenAI, everything stored on Google Docs, all of it), is just a counterparty, not a third party.
And it has to be unauthorised: e.g. the New York Times getting to see my ChatGPT history isn't itself a leak, because that's court-ordered and hence authorised; the >1200 "trusted partners" in GDPR popups are authorised if you give consent; and so on.