I'm a small-time webmaster and I haven't "set up" any automation - for my shared-hosting sites, the host has it built in; and for my self-hosted sites, the web server has it built in.
The problem is that this breaks down if you don't want to leak any obscure subdomains you might be using via CT logs - shared hosting rarely supports DNS-based certificate renewal for wildcard certificates, and even less so when the domain's DNS is hosted by an external registrar.
(Even for a fully self-hosted system you'd still have to figure out how to interface the certificate renewal mechanism with your DNS provider, so not as easy to set up as individual certificates for each subdomain.)
> (Even for a fully self-hosted system you'd still have to figure out how to interface the certificate renewal mechanism with your DNS provider, so not as easy to set up as individual certificates for each subdomain.)
That's exactly what the new DNS-PERSIST-01 challenge is for: it lets you authorize a specific system (or set of systems) to request certs for a given FQDN, and optionally its subdomains, without having to give that system direct control over your DNS the way the existing DNS-01 challenge requires.
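For concreteness, here's a minimal sketch of what "interfacing the certificate renewal mechanism with your DNS provider" looks like under plain DNS-01 today. The `DnsProviderClient` class, its methods, and the `DNS_API_TOKEN` variable are hypothetical placeholders for whatever API your DNS host actually exposes - the point is just that the renewal hook has to hold a credential that can edit your zone, which is exactly the access DNS-PERSIST-01 is meant to let you avoid handing out.

```python
# Minimal sketch (not a real SDK): what a DNS-01 renewal hook has to do.
# The ACME client hands us a validation token for a domain, and we publish it
# as a TXT record at _acme-challenge.<domain> via the provider's API - which
# means this hook needs credentials that can edit the whole zone.
import os
import time


class DnsProviderClient:
    """Hypothetical stand-in for a real DNS provider API client."""

    def __init__(self, api_token: str):
        self.api_token = api_token  # zone-editing credential the hook must hold

    def upsert_txt_record(self, zone: str, name: str, value: str, ttl: int = 60) -> None:
        pass  # placeholder: would call the provider's record-update endpoint

    def delete_txt_record(self, zone: str, name: str) -> None:
        pass  # placeholder: would clean up the challenge record after validation


def deploy_challenge(domain: str, validation_token: str) -> None:
    client = DnsProviderClient(api_token=os.environ["DNS_API_TOKEN"])
    client.upsert_txt_record(
        zone=domain,
        name=f"_acme-challenge.{domain}",
        value=validation_token,
    )
    time.sleep(30)  # crude wait for DNS propagation before the CA validates
```

Every ACME client shapes these hooks a bit differently, but they all boil down to "hold a zone-editing credential and write a TXT record", which is the coupling being complained about above.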
Each planet has its own gimmick that throws a spanner into standard builds in a different way - one planet is essentially a farm where your factory grows and processes fruit, which will rot and spoil if it isn't processed immediately - so you need to design a factory that processes small packets at high speed without any buffering.
My modest homelab is currently running 42 unique images, and it seems "checking for updates" counts as a pull even if it doesn't download anything, so the hourly limits will kick in even if I only run `docker compose pull` once a month...
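For what it's worth, you can check where your counter stands without spending a pull - Docker Hub publishes the numbers as response headers on the dedicated `ratelimitpreview/test` repository, and a HEAD request on a manifest is documented as not counting against the limit. A minimal stdlib-only sketch (worth double-checking the header names against your own responses):

```python
# Minimal sketch: query Docker Hub's pull rate-limit counters anonymously.
# Uses the documented ratelimitpreview/test repository; a HEAD request on its
# manifest returns ratelimit-limit / ratelimit-remaining headers without
# consuming a pull (a GET of the manifest would count).
import json
import urllib.request

# 1. Fetch an anonymous pull token scoped to the rate-limit preview repo.
token_url = (
    "https://auth.docker.io/token"
    "?service=registry.docker.io"
    "&scope=repository:ratelimitpreview/test:pull"
)
with urllib.request.urlopen(token_url) as resp:
    token = json.load(resp)["token"]

# 2. HEAD the manifest and read the rate-limit headers off the response.
req = urllib.request.Request(
    "https://registry-1.docker.io/v2/ratelimitpreview/test/manifests/latest",
    method="HEAD",
    headers={"Authorization": f"Bearer {token}"},
)
with urllib.request.urlopen(req) as resp:
    print("limit:    ", resp.headers.get("ratelimit-limit"))
    print("remaining:", resp.headers.get("ratelimit-remaining"))
```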
I'm happily using Zed with autocomplete in rust / python / php / javascript / go -- I forget which ones were built-in and which were a one-click "I see you're opening an X file, would you like to download the X language server?", but they all work.
An unfortunate side effect of GitHub being extremely user-friendly while Debian's reportbug tool still greets "novice" users with "please enter your SMTP host" (I dread to think what questions it would have asked me if I'd selected the "expert" mode).
I've been using the Sapling CLI frontend with GitHub as a backend for all my open source projects for a few years now (I've also used mononoke and eden at work, but IMO those only apply to people with hundred-gigabyte monorepos). So speaking based on my open source work:
- I am absolutely loving the improved UI/UX for common operations - being able to do the same actions with fewer commands and fewer concepts to understand, and getting much more helpful error messages when I try to do something invalid
- The existence of some unique features (`split`, `absorb`, and `restack` being particular favourites -- IIRC people have created third-party scripts to replicate these commands for git, but last I checked they weren't as good, and they aren't installed by default)
- Having the commit log integrated with GitHub (being able to see which of my branches correspond to which PRs, and whether each PR is unreviewed / accepted / rejected / merged)
> Sapling supports cloning, pushing, and pulling from a remote Git repo. jj also does, and it also supports sharing a working copy with a Git repo, so you can use jj and git interchangeably in the same repo
As a minor update - Sapling now also supports the .git on-disk format, so you can use git and sl interchangeably in the same repo.
Hack is only PHP in a very Ship-of-Theseus sense - it has PHP _vibes_, but they replaced the language, the runtime, the standard library, and all of the infrastructure.
(and all of them much improved over PHP IMO - especially XHP [equivalent to JSX, where HTML is a first-class citizen in the language syntax])