One reason why Unix quotas are generally not maintained and imposed by path is that it's a lot easier to update quotas as things are created, deleted, modified, and so on if the only thing that matters for who gets charged is some attribute of the inode, which you always have available. This was especially the case in the 1980s (when UCB added disk quotas), because that was before kernels tracked name to inode associations in RAM the way they generally do today. (But even today things like hardlinks raise questions.)
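To illustrate the difference (this is a made-up sketch in Go, not actual kernel code): when a write allocates blocks, the filesystem already has the inode in hand, so charging by an inode attribute such as the owner uid is a trivial field lookup, with no need to know what name the file was reached under.

    package main

    import "fmt"

    // Inode is a hypothetical in-memory inode carrying the attributes a
    // filesystem always has available when it does I/O to a file.
    type Inode struct {
        UID    int
        Blocks int64
    }

    // quotaUsed is a stand-in for a per-filesystem quota table, indexed by
    // the attribute that decides who gets charged (here, the owner uid).
    var quotaUsed = map[int]int64{}

    // chargeWrite charges newly allocated blocks to the inode's owner.
    // Everything needed comes from the inode itself; no path is required.
    func chargeWrite(ino *Inode, blocks int64) {
        ino.Blocks += blocks
        quotaUsed[ino.UID] += blocks
    }

    func main() {
        f := &Inode{UID: 1000}
        chargeWrite(f, 8)
        fmt.Println(quotaUsed[1000]) // 8
    }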
I've made a PDF of one of my bike club cue sheets from 2014 and put it at https://www.cs.toronto.edu/~cks/tbn/tbn-gatineau-gallop-2014... ; the GPS route that is more or less equivalent to it is https://ridewithgps.com/routes/28370340 (there may be minor differences because the route is more modern than the cue sheet, but it will give you orientation). The cue sheet is written for a group ride (where the group will stay together) and for people familiar with Toronto, so it might be challenging to follow solo unless you were already somewhat familiar with the ride (as the ride leader is expected to be).
The cue sheet is structured the way it is because it's expected it will be folded in half horizontally to fit in a map/cue sheet holder, and perhaps vertically as well (if people have a small holder; you fold vertically first, initially hiding the entire right column since you only need it after lunch, then horizontally). Cue sheet holders typically let you flip them up to see the back, so the exact division of a horizontal fold doesn't have to be perfect. Each numbered section covers a (relatively) distinct section of the ride to make it easier to keep track of where you are in the cue sheet overall.
Cue sheets for different circumstances need different sorts of structure. For example, for some cue sheets it would be quite important to include the distance (cumulative and/or from the previous cue). In others, such as this one, individually numbered cues and distances to them are mostly distractions.
(I'm the author of the linked-to blog entry, and as you can tell I have Opinions on cue sheet design.)
In 1989, the costs appear to have been significantly different, although on a casual search I don't see list prices for, e.g., then-older Sun models like the 3/60. A brand new SPARCstation 1 (also 1989) was far more expensive than an NCD 16 or NCD 19, and a diskless Unix workstation would need more server support and disk space than an X terminal. Today is a different story, but that's because PC prices have dropped so dramatically.
Pricing and charging for storage inside an organization is always ultimately a non-technical decision that has to balance who pays for it versus the consequences of it not being paid for. This is especially the case within organizations like universities, which have unusual funding and funding patterns (for instance, one time capex is usually much easier than guaranteed ongoing opex). We (the people providing the disk storage) know that there are ongoing costs to doing so, but the non-technical decision has been made to cover those costs in other ways than charging professors on a recurring basis.
Oh yes, I recall the fun of that, from many years ago.
One of my favorites remains when we were prototyping our next gen of servers for some compute next to quite a lot of disk. We had a prototype design, but we weren't done testing it, and someone needed a grant spent _now_, so they bought that design.
Unfortunately, that was the Dell R715/R815 family, which, as you may recall, had some... unique performance characteristics, so we didn't go with those for the final model, but had to deal with supporting them thereafter.
Interfaces aren't bit-packed, and they force every value to be stored in a separate allocation that the interface contains a pointer to (escape analysis may allow this separate value to live on the stack, along with the interface itself). I believe that Go used to have an optimization where values that fit in a pointer were stored directly in the interface value, but it was abandoned, perhaps partly because of the GC 'is it a pointer or not' issue. In my view, some of what people want union types for is exactly efficient bit-packing that uses little or no additional storage, and they'd be unhappy with a 'union values are just interface values' implementation.
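As a quick concrete demonstration (a minimal sketch; the numbers assume a 64-bit platform, and because the runtime interns very small integers and the compiler can pre-box constants, the allocation test uses a runtime-derived value above that range):

    package main

    import (
        "fmt"
        "os"
        "testing"
        "unsafe"
    )

    // sink is package-level so the compiler can't optimize the
    // interface conversion away.
    var sink interface{}

    func main() {
        // An interface value is always two machine words (type pointer
        // plus data pointer), no matter what concrete type it holds.
        var i interface{} = 42
        fmt.Println(unsafe.Sizeof(i)) // 16 on 64-bit platforms

        // Converting a plain int to an interface boxes it in a separate
        // allocation; the interface then points at that box.
        n := len(os.Args) + 1000 // runtime-derived, above the small-int cache
        allocs := testing.AllocsPerRun(100, func() { sink = n })
        fmt.Println(allocs) // typically 1
    }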
A separate allocation is not forced. The implementation could allocate a block of memory large enough to hold the two pointers for the interface value together with the largest of the types that implements the interface. (You can't do that with an open interface because there's no upper bound, but the idea here is to let you define closed interfaces.)
In cases where there is a lot of variance in the size of the different interface implementations, separate allocations could actually be more memory efficient than a tagged union. In any case, I'm not sure that memory efficiency is the main reason that people miss Rust-style enums in Go.
The problem with allocating bit-packed storage is that you then run into the issue that different types don't agree on where the pointers are. Interface values solve this today because they are always mono-typed (an interface value always stores two pointers), so the runtime is never forced to know the current pointer-containing shape of a specific interface value. And the values that interface values 'contain' are also always of a fixed type, so they can be allocated and maintained with existing GC mechanisms (including special allocation pools for objects without pointers, and so on).
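For contrast, here's a sketch of the one way today's Go does let you get a fixed pointer layout for a closed set of variants: give every variant its own non-overlapping field. The type and field names are invented for illustration; the point is that the GC always knows that only the string field can contain a pointer, and the price is that every value is the size of all the variants combined rather than just the largest one.

    package main

    import (
        "fmt"
        "unsafe"
    )

    type kind uint8

    const (
        kindInt kind = iota
        kindString
        kindPair
    )

    // value is a hand-rolled closed "union" of int | string | [2]float64.
    // Because the fields don't overlap, the struct has a single fixed
    // layout and the GC never has to ask which variant is currently live.
    type value struct {
        k    kind
        i    int
        s    string
        pair [2]float64
    }

    func (v value) String() string {
        switch v.k {
        case kindInt:
            return fmt.Sprint(v.i)
        case kindString:
            return v.s
        default:
            return fmt.Sprint(v.pair)
        }
    }

    func main() {
        vs := []value{{k: kindInt, i: 7}, {k: kindString, s: "hi"}}
        for _, v := range vs {
            fmt.Println(v)
        }
        // Every element pays for all the variants, not just the one it uses.
        fmt.Println(unsafe.Sizeof(value{})) // 48 on 64-bit platforms
    }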
I agree with you about the overall motivation for Rust-style enums. I just think it's surprisingly complex to get even the memory efficiency advantages, never mind anything more ambitious.
The bigger problem is mutability. Any pointers into the bit-packed enum storage become invalid as soon as you change its type. To solve this you can either prohibit pointers into bit-packed enum storage, which is very limiting, or introduce immutability into the language. Immutability is particularly difficult to add to Go, where default zero values emerge in unexpected places (such as the spare capacity of slices and the default state of named return values).
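As a small illustration of that last point: zero values show up in storage you never explicitly initialized, so any union-like type would need a meaningful all-zero state. A minimal sketch:

    package main

    import "fmt"

    // Named return values also start life as the zero value.
    func zeroByDefault() (v int) {
        return // returns 0; v was never assigned
    }

    func main() {
        s := make([]int, 1, 4) // length 1, but the backing array has 4 zeroed slots
        s[0] = 42
        t := s[:cap(s)] // reslice into the spare capacity
        fmt.Println(t)  // [42 0 0 0]: the extra elements are zero values

        fmt.Println(zeroByDefault()) // 0
    }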
In a lot of environments, you can at least choose to restrict what networks can be used to manage equipment; sometimes this is forced on you because the equipment only has a single port it will use for management or must be set to be managed over a single VLAN. Even when it's not forced, you may want to restrict management access as a security measure. If you can't reach a piece of equipment with restricted management access over your management-enabled network or networks, for instance because a fiber link in the middle has failed, you can't manage it (well, not remotely; you can usually go there physically to reset or reconfigure it).
You can cross-connect your out of band network to an in-band version of it (give it a VLAN tag, carry it across your regular infrastructure as a backup to its dedicated OOB links, have each location connect the VLAN to the dedicated OOB switches), but this gets increasingly complex as your OOB network itself gets complex (and you still need redundant OOB switches). As part of the complexity, this increases the chances an in-band failure affects your OOB network. For instance, if your OOB network is routed (because it's large), and you use your in-band routers as backup routing to the dedicated OOB routers, and you have an issue where the in-band routers start exporting a zillion routes to everyone they talk to (hi Rogers), you could crash your OOB network routers from the route flood. Oops. You can also do things like mis-configure switches and cross over VLANs, so that the VLAN'd version of your OOB network is suddenly being flooded with another VLAN's traffic.
We might be talking at cross-purposes a bit, but it also seems that you're considering a much larger scale than me, and I hadn't really considered that some people might want to do data-intensive transfers on the management network, e.g. VM snapshots and backups.
Because of how I use it, I was only considering the management port as being for management, and it's separated for security. In the example in the article, there was a management network that was entirely separate from the main network, with a different provider etc. I guess you may have a direct premises-to-premises connection, but I was assuming it'd just be a backup internet connection with a VPN on top of that, so in theory any management network can connect to any other management network, unless its own uplink is severed. Of course, you need ISPs that ultimately have different upstreams.
In the situation that your management network uplink is down, I'd presume that was because of a temporary fault with that ISP, which is different to the provider for your main network uplink. You'd have to be pretty unlucky for that to be down too. Sure, I can foresee a hypothetical situation where you completely trash the routes of your main network and then by some freak incident your management uplink is also severed. But I think the odds are low, because your aim should be to always have the main network working correctly anyway. If you maintain 99.9% uptime on your main network and your management uplink from another provider is also at 99.9%, the likelihood of both being down at once is 0.0001%.
I'd also never, ever, ever want a VLAN-based management network, unless that VLAN only exists on your internal routers and is split back out into individual networks before it goes outside the server rooms. Otherwise, you've completely lost any security benefit of using an isolated network. OTOH, maintaining a parallel backup network on a VLAN that's completely independent of the management network, but which can easily be patched in by someone at that site if you need them to, isn't necessarily a bad thing.
But anyway, these are just my opinions, and it's been a long time since I was last responsible for maintaining a properly large network, so your experience is almost definitely going to be more useful and current than mine.
Because of our (work) situation, I was thinking of an OOB network with its own dedicated connections between sites, instead of the situation where you can plug each site into a 'management' Internet link with protection for your management traffic. However, once your management network gets into each site, the physical management network at that site needs to worry about redundancy if it's the only way to manage critical things there. You don't want to be locked out of a site's router or firewall or the like because a cheap switch on the management network had its power supply fail (and they're likely to be inexpensive because the management network is usually low usage and low port count).
The obvious advantage of using domain names, and URLs in general, as the package names is that the Go project doesn't have to run a registry for package names. Running a registry is both a technical and especially a political challenge, as you must deal with contention over names, people trying to recover access to their names, and so on. By using URLs, the Go project was able to avoid touching all of those issues; instead they're the problem of domain name registries, code hosting providers, and so on.
My badly communicated overall point is that I don't think it's right to say that Go started without any thought about dependency management. Instead, the Go developers had a theory for how it would work (with $GOPATH creating workspaces), but in practice their theory didn't work out (for various reasons). For me, this makes the evolution of Go modules much more interesting, because we can get a window into what didn't work.
(I'm the author of the linked-to entry. I wrote the entry because my impression is that a lot of modern Go programmers don't have this view of pre-module Go, and especially Go when you had to set $GOPATH and it was expected that you'd change it, instead of a default $HOME/go that you used almost all the time.)
It's certainly an interesting angle. It might have been clearer if the documentation and how-tos had explicitly used that sort of terminology: e.g., saying that step 1 of creating a project would be to create a new workspace directory for all the dependencies, then create a package directory (with a git tree) inside that.
There are certainly nice things that the current Go module system buys; but one thing I miss is that under the old system, if one of the packages wasn't working the way you expected, the code was right there already; all you had to do was to go to that directory and start editing it, because it was already cloned and ready to go. The current thing where you have to clone the repo locally, add a "go.work" to replace just that package locally to do experimentation, and then remove the "go.work" afterwards isn't terrible, but it's just that extra bit of annoying busy-work.
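For concreteness, a minimal sketch of what that temporary go.work step looks like (the directory names here are made up): you point a go.work at both your own module and a local clone of the dependency, and delete the file when you're done experimenting.

    go 1.22

    use (
        .                 // the module you're actually working on
        ../somedependency // a local clone of the package you want to poke at
    )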
But being able to downgrade by simply changing the version in go.mod is certainly a big improvement, as is having the hashes to make supply chain attacks more difficult.
You can directly edit the downloaded code when using modules, you don’t need your go.work flow at all. I often do it while debugging particularly weird bugs or to better understand library code I’m using.
You can just edit the copy that ‘go get / go build’ download to your system. Afterwards undo the edits or re-download the module to wipe out your edits. No need to use go.work, local replaces, or anything. The files are on your disk!
From my experience, the reason $GOPATH has pkg, bin, and src directories was that they wanted to limit how much it affected the rest of the filesystem. I could zip up my go directory, put it on another computer, and it would work as well as before. I hated that Python couldn't do this; even pyenv has a high chance of not working correctly if moved to a different distro, and most of the time I don't want to download 500M of Anaconda every time I swap.
Based on an extremely quick skim, this appears aimed only at projects that are using autoconf purely for portability across standard Unix environments. It admits up front that it drops features that people find valuable about configure, like --prefix et al and the entire feature-selection cluster of options (now you have to edit Makefiles, which has various issues), and it appears to have nothing for projects that need their own checks for additional features of the environment (OpenZFS being an extreme example). If I were being unkind, I would say it's an autoconf replacement for people who don't need autoconf to start with (and don't care about --prefix et al).
There is an ecological niche for 'you don't need autoconf' (and don't care about aspects it gives you for free), just like there's an ecological niche for 'you don't need Javascript', but I don't think it's a significant one.
NFS v2 writes are all synchronous. NFS v3 added an option to make them asynchronous, along with an additional 'COMMIT' NFS operation that flushes them to storage. In theory, the way it works is that an NFS v3 client sends some number of async writes, holding a copy of their data in its own memory, and then sends a COMMIT to flush them all. If the NFS server replies to the COMMIT with an error, the NFS client has to re-send those async writes and their data (possibly as sync writes this time around); otherwise, it can discard its copy of the written data. NFS v3 clients can still decide to send sync writes if they don't want to keep track of all of this on their end for some reason (including not having enough memory to hold the write data locally). And an NFS v3 server can opt to immediately write out theoretically 'async' writes for similar reasons. All of this is still true in NFS v4, with I think even more elaborations on the theme.
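Here's a conceptual sketch in Go of the client-side bookkeeping described above. This is not a real NFS client: the Server interface and its methods are invented stand-ins for the actual WRITE (unstable and synchronous) and COMMIT operations, and real clients also track details like write verifiers that are glossed over here.

    package main

    import "fmt"

    // Server is a hypothetical stand-in for an NFS v3 server connection.
    type Server interface {
        WriteUnstable(offset int64, data []byte) error // async WRITE
        WriteSync(offset int64, data []byte) error     // synchronous WRITE
        Commit() error                                 // COMMIT
    }

    type pendingWrite struct {
        offset int64
        data   []byte
    }

    // flush sends a batch of async writes followed by a COMMIT. The client
    // must hold on to its copies of the data until the COMMIT succeeds; if
    // the COMMIT fails, the client's copies are the only ones left, so the
    // writes are re-sent (synchronously here, for simplicity).
    func flush(srv Server, writes []pendingWrite) error {
        for _, w := range writes {
            if err := srv.WriteUnstable(w.offset, w.data); err != nil {
                return err
            }
        }
        if err := srv.Commit(); err != nil {
            for _, w := range writes {
                if err := srv.WriteSync(w.offset, w.data); err != nil {
                    return fmt.Errorf("re-send after failed COMMIT: %w", err)
                }
            }
        }
        // Only now is it safe to discard the client-side copies.
        return nil
    }

    // fakeServer is a trivial in-memory stand-in used just to exercise flush.
    // While failCommit is set, unstable writes are silently "lost".
    type fakeServer struct {
        failCommit bool
        stable     map[int64][]byte
    }

    func (s *fakeServer) WriteUnstable(off int64, data []byte) error {
        if s.stable == nil {
            s.stable = map[int64][]byte{}
        }
        if !s.failCommit {
            s.stable[off] = append([]byte(nil), data...)
        }
        return nil
    }

    func (s *fakeServer) WriteSync(off int64, data []byte) error {
        if s.stable == nil {
            s.stable = map[int64][]byte{}
        }
        s.stable[off] = append([]byte(nil), data...)
        return nil
    }

    func (s *fakeServer) Commit() error {
        if s.failCommit {
            s.failCommit = false // the "reboot" is over; subsequent writes stick
            return fmt.Errorf("unstable writes were lost")
        }
        return nil
    }

    func main() {
        srv := &fakeServer{failCommit: true}
        writes := []pendingWrite{{0, []byte("hello")}, {5, []byte(" world")}}
        if err := flush(srv, writes); err != nil {
            fmt.Println("flush error:", err)
            return
        }
        fmt.Println("stable writes on the server:", len(srv.stable)) // 2
    }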
(I'm the author of the linked-to article and I have a long-standing interest in weird NFS behavior, since we operate NFS servers.)