ecnahc515's comments | Hacker News

Seems like the enactor should check the version/generation of the current record before it applies the new value, to ensure it never applies an old plan on top of a record updated by a newer plan. It wouldn't be as efficient, but that's just how it is. It's a basic compare-and-swap operation, so it could be handled easily within DynamoDB itself, where these records are stored.
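To make that concrete, here's a minimal sketch of the generation check (the `Record`/`Plan` types are invented for illustration, not the actual service's schema; in DynamoDB itself this would be a conditional write with a `ConditionExpression` on the generation attribute):

```go
package main

import "fmt"

// Record and Plan are hypothetical types for illustration.
type Record struct {
	Generation int
	Value      string
}

type Plan struct {
	BaseGeneration int // the generation the plan was computed against
	NewValue       string
}

// applyPlan is a compare-and-swap: the plan is applied only if the record
// has not advanced past the generation the plan was based on.
func applyPlan(r *Record, p Plan) bool {
	if r.Generation != p.BaseGeneration {
		return false // stale plan: a newer plan already updated the record
	}
	r.Value = p.NewValue
	r.Generation++
	return true
}

func main() {
	r := &Record{Generation: 5, Value: "current"}
	fmt.Println(applyPlan(r, Plan{BaseGeneration: 4, NewValue: "stale"})) // false: rejected
	fmt.Println(applyPlan(r, Plan{BaseGeneration: 5, NewValue: "fresh"})) // true: applied
}
```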


Also, it probably took less time than that to form an idea, but for public updates you generally want to be very reserved; otherwise users get the wrong impression.


Sure, they were definitely using Docker for their own applications, but dotCloud was itself a PaaS, so they were trying to compete with Heroku and similar offerings, which had buildpacks.

The problem is/was that buildpacks aren't as flexible and only work if a buildpack exists for your language/runtime/stack.


If the OP is the author, did you consider filing a bug with errcheck? It should be possible for errcheck to check whether the comparison is being done within an `Is(err error) bool` method and skip the warning in that case, or even better: it could check whether you're using `errors.Is` within an `Is` method and warn in that case!


The linter in the post is `err113`. `errortype` does already warn:

https://github.com/fillmore-labs/errortype#errortype


Ah right, wrong linter. Thanks for confirming!


Cron jobs often run as root. If the host is configured to send email when a cron job completes, it defaults to sending it to user@domain, where the user is the one the cron job runs as and the domain is whatever was configured in the cron configuration.
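For illustration, a crontab can override that default destination with a `MAILTO` line (the address and script path here are made up):

```
MAILTO=ops@example.com
0 3 * * * /usr/local/bin/nightly-backup
```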


Minor nitpicky correction: cron only sends an email if the job produces any output.

This is an important distinction, because if you have configured mail forwarding, your cron jobs should be written to produce output only on error; then every email is actionable.


Moreutils has a great command, `chronic`, which is a wrapper command like `time` or `sudo`, i.e. you just run `chronic <command>`. It suppresses stdout and stderr until the command exits, at which point it prints them only if the exit code was non-zero.
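A rough sketch of what `chronic` does, as a simplified shell function (this is an illustration, not moreutils' actual implementation, and unlike the real tool it merges stdout and stderr):

```shell
#!/bin/sh
# chronic-like wrapper: run a command quietly, replay its output only on failure
quiet_run() {
  out=$("$@" 2>&1)   # capture stdout+stderr
  status=$?
  if [ "$status" -ne 0 ]; then
    printf '%s\n' "$out"   # command failed: show what it printed
  fi
  return "$status"
}

quiet_run true                                     # prints nothing
quiet_run sh -c 'echo oops >&2; exit 1' || true    # prints "oops"
```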


I copied the same idea in my static collection of sysadmin utilities:

https://github.com/skx/sysbox/


Instead it requires QEMU!


I worked at the OSL as a student years ago, and it was one of the most impactful places I've ever worked at. I learned a lot, and I wouldn't be the engineer I am today without having worked there.

Since graduating, I've also hired, and worked with multiple alumni from the OSL and they're always top notch. Anyone looking for interns or new graduates with devops/SRE or SWE experience should be looking at the OSL for talent. It's not too often you can hire a new graduate with potentially multiple years of production experience, especially in devops.

In context of HN/Y Combinator, https://www.ycombinator.com/companies/coreos was a successful container/Kubernetes focused startup founded by two OSUOSL alumni, Alex Polvi and Brandon Philips, which was eventually acquired by Red Hat.

The OSL is something special.

For a list of projects the OSL helps host, check out https://osuosl.org/communities/. You might see a project you care about in that list! As an example: they provide aarch64 and powerpc VMs for a ton of projects to do their CI/builds on.


Same. I helped out/worked there out of high school (back around 2010).

One of the best experiences of my life.

I still prefer to use the OSL for my linux repos.


As someone who was a student at the OSL when Vagrant was hip, also thanks to Mitchell for creating Vagrant! We used it a ton for testing all of our configuration management.


That's 60% of the _budget_, not 60% of their time.

Also: Lance is almost certainly working more than 40 hours a week. And he isn't just a systems administrator; he's a mentor, fundraiser, and literally everything else that is needed to keep the lab running. There used to be more staff, but it's hard to retain qualified individuals. He's been there for 17 years; he's not doing it for the money, he does it because the OSL is important!


Oh, and since he's a public employee, you can look up the current salary and history.

https://hr.oregonstate.edu/sites/hr.oregonstate.edu/files/er...

https://www.openthebooks.com/oregon-state-employees/?F_Name_...

I'll summarize it:

$107k in 2017 and $124k in 2023. I don't know about you, but someone with 17 years of experience could easily be making 2-5x that, depending on the company and role.


And it's $124k on the west coast, specifically western Oregon. Coastal PNW is notoriously expensive to live in - sure, Corvallis isn't Seattle, but it's not cheap, either. Folks like to balk at numbers when it comes to publicly (or FOSS-donation-ly) funded salaries, but also balk at tying context to those numbers. It happens almost every time anyone dares try to pay their bills on open-source work: a flame war over "you don't need that number, you could move to your parents' basement in Arkansas and survive on $20k instead!".


I grew up near/around/in Corvallis. $124k is quite a bit there. Food is cheap, you can find pretty cheap land/realty, etc. Overall it's pretty reasonable.

That said, $124k is not a lot for what Lance does.


Maybe it was at some point in the past? I have friends who currently live there and it does not sound cheap. Again - not Seattle bad, maybe not even Bellingham bad, but still not "124 is quite a bit" levels (from the sounds of it).

Regardless, we can definitely agree: Lance could be making a ton more elsewhere, but is a saint who cares about his work, and I appreciate his dedication!


I have indeed lived life wrong. I work in HPC as a Systems Engineer (right now, in 2025, with graduate degrees in engineering and 25 years of systems admin/engineering experience) and do not make what this person made in 2017, much less in 2025, or 2-5x that amount for that matter (total dream salary, geez). At one time I was the data center manager and teaching CS classes, at the same time, working 80 hours a week.

How the heck do these people secure these high paying jobs? There is some club, and I am not in it. Sorry to rant, but that 1FTE salary is huge.


Take a look at salary reports from places like https://www.levels.fyi/2024

If you think $124k a year is high compensation for someone with 17 years of experience in Portland, your compensation expectations are way off.


Wow, I read your informative link. Where are these jobs? I went through a round of interviews last year for Sr. positions, across a number of locations in the U.S., and quite frankly, the average salary for the positions interviewed for was $80k less than most of those in the list, and $230k less than the SWE manager in the list.


The page lists the locations, and the businesses, where these jobs are placed. Unless you live on the coast (or end up in Denver/Austin), you're going to have a harder time reaching these salary numbers.


> in Portland

Corvallis


While this is great, for people claiming they can now build multi-arch images without emulation: how are you planning on doing so? As far as I know, if you want to build multi-arch images on native runners for each platform, you basically need to:

* Configure a workflow with 1 job for each arch, each building a standalone single-arch image, tagging it with a unique tag, and pushing each to your registry

* Configure another job, which runs at the completion of the previous jobs, that creates a combined manifest containing each image using `docker manifest create`.

Basically, doing the steps listed in https://www.docker.com/blog/multi-arch-build-and-images-the-... under "The hard way with docker manifest".
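For what it's worth, those two steps can be sketched as a workflow roughly like this (the runner labels, `$IMAGE` name, and omitted registry login are placeholders to adapt, not a tested recipe):

```yaml
jobs:
  build:
    strategy:
      matrix:
        include:
          - runner: ubuntu-24.04      # amd64 runner
            arch: amd64
          - runner: ubuntu-24.04-arm  # arm64 runner, no QEMU needed
            arch: arm64
    runs-on: ${{ matrix.runner }}
    steps:
      - uses: actions/checkout@v4
      # registry login omitted; assume IMAGE is something like ghcr.io/me/app
      - run: |
          docker build -t "$IMAGE:${{ github.sha }}-${{ matrix.arch }}" .
          docker push "$IMAGE:${{ github.sha }}-${{ matrix.arch }}"

  manifest:
    needs: build
    runs-on: ubuntu-24.04
    steps:
      - run: |
          docker manifest create "$IMAGE:latest" \
            "$IMAGE:${{ github.sha }}-amd64" \
            "$IMAGE:${{ github.sha }}-arm64"
          docker manifest push "$IMAGE:latest"
```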

Does anyone have a better approach, or some reusable workflows/GHA that make this process simpler? I know about Depot.dev which basically abstracts the runners away and handles all of this for you, but I don't see a good way to do this yourself without GitHub offering some better abstraction for building docker images.

Edit: I just noticed https://news.ycombinator.com/item?id=42729529 which has a great example of exactly these steps (and I just realized you can just push the digests, instead of tags too, which is nice).


Does build-push-action solve this? I haven’t used their multi-arch configs but I was under the impression that it was pretty smooth.

https://github.com/docker/build-push-action


It runs in a single job, where single job = single runner. To use two runners/jobs to build multi-platform, each needs to push an untagged image, and the SHAs are aggregated into a manifest in a third job. Definitely doable, and the recipes will come out.

Personally prefer just using Go/ko whenever possible ;)

